Commit Graph

40930 Commits

Author SHA1 Message Date
Peter Eisentraut ab0a23c7c9 Fix typo 2016-08-09 19:08:00 -04:00
Bruce Momjian 2abf92bc69 docs: my first pass over the 9.6 release notes 2016-08-09 18:36:18 -04:00
Tom Lane e775d35317 Doc: clarify description of CREATE/ALTER FUNCTION ... SET FROM CURRENT.
Per discussion with David Johnston.
2016-08-09 13:39:24 -04:00
Tom Lane 67c08c0d70 Stamp 9.6beta4. 2016-08-08 16:25:04 -04:00
Bruce Momjian cfdadf5f93 doc: update list of pg_trgm authors
Author: Oleg Bartunov
2016-08-08 14:02:43 -04:00
Tom Lane de4b3ea16d Update 9.6 release notes through today. 2016-08-08 13:13:05 -04:00
Tom Lane 9b8271c5a6 Last-minute updates for release notes.
Security: CVE-2016-5423, CVE-2016-5424
2016-08-08 11:56:10 -04:00
Peter Eisentraut 9a46324fd4 Fix several one-byte buffer over-reads in to_number
Several places in NUM_numpart_from_char(), which is called from the SQL
function to_number(text, text), could accidentally read one byte past
the end of the input buffer (which comes from the input text datum and
is not null-terminated).

1. One leading space character would be skipped, but there was no check
   that the input was at least one byte long.  This does not happen in
   practice, but for defensiveness, add a check anyway.

2. Commit 4a3a1e2cf apparently duplicated, by accident, the code that
   skips one space character (so that two spaces might be skipped), but
   there was no overflow check before skipping the second byte.  Fix by
   removing the duplicate code.

3. A logic error would allow a one-byte over-read when looking for a
   trailing sign (S) placeholder.

In each case, the extra byte cannot be read out directly, but looking at
it might cause a crash.

The third item was discovered by Piotr Stefaniak, the first two were
found and analyzed by Tom Lane and Peter Eisentraut.
2016-08-08 11:12:59 -04:00
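For illustration, two minimal to_number() calls that exercise the paths
described above (leading-space skipping and a trailing S sign
placeholder); the values are arbitrary:

    SELECT to_number(' 123', '9999');            -- leading space is skipped
    SELECT to_number('12,454.8-', '99G999D9S');  -- trailing sign placeholder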
Peter Eisentraut 34927b2920 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: cda21c1d7b160b303dc21dfe9d4169f2c8064c60
2016-08-08 11:08:00 -04:00
Tom Lane f0c7b789ab Fix two errors with nested CASE/WHEN constructs.
ExecEvalCase() tried to save a cycle or two by passing
&econtext->caseValue_isNull as the isNull argument to its sub-evaluation of
the CASE value expression.  If that subexpression itself contained a CASE,
then *isNull was an alias for econtext->caseValue_isNull within the
recursive call of ExecEvalCase(), leading to confusion about whether the
inner call's caseValue was null or not.  In the worst case this could lead
to a core dump due to dereferencing a null pointer.  Fix by not assigning
to the global variable until control comes back from the subexpression.
Also, avoid using the passed-in isNull pointer transiently for evaluation
of WHEN expressions.  (Either one of these changes would have been
sufficient to fix the known misbehavior, but it's clear now that each of
these choices was in itself dangerous coding practice and best avoided.
There do not seem to be any similar hazards elsewhere in execQual.c.)

Also, it was possible for inlining of a SQL function that implements the
equality operator used for a CASE comparison to result in one CASE
expression's CaseTestExpr node being inserted inside another CASE
expression.  This would certainly result in wrong answers since the
improperly nested CaseTestExpr would be caused to return the inner CASE's
comparison value not the outer's.  If the CASE values were of different
data types, a crash might result; moreover such situations could be abused
to allow disclosure of portions of server memory.  To fix, teach
inline_function to check for "bare" CaseTestExpr nodes in the arguments of
a function to be inlined, and avoid inlining if there are any.

Heikki Linnakangas, Michael Paquier, Tom Lane

Report: https://github.com/greenplum-db/gpdb/pull/327
Report: <4DDCEEB8.50602@enterprisedb.com>
Security: CVE-2016-5423
2016-08-08 10:33:46 -04:00
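One way to construct the nested-CASE shape described above, with a
simple-form CASE whose test value is itself a simple-form CASE (the
table and values are made up):

    SELECT CASE (CASE x WHEN 1 THEN 10 END)
           WHEN 10 THEN 'ten'
           ELSE 'other'
           END
    FROM (VALUES (1), (2)) AS t(x);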
Noah Misch fcd15f1358 Obstruct shell, SQL, and conninfo injection via database and role names.
Due to simplistic quoting and confusion of database names with conninfo
strings, roles with the CREATEDB or CREATEROLE option could escalate to
superuser privileges when a superuser next ran certain maintenance
commands.  The new coding rule for PQconnectdbParams() calls, documented
at conninfo_array_parse(), is to pass expand_dbname=true and wrap
literal database names in a trivial connection string.  Escape
zero-length values in appendConnStrVal().  Back-patch to 9.1 (all
supported versions).

Nathan Bossart, Michael Paquier, and Noah Misch.  Reviewed by Peter
Eisentraut.  Reported by Nathan Bossart.

Security: CVE-2016-5424
2016-08-08 10:07:46 -04:00
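A hypothetical example of the kind of database name that simplistic
quoting could mishandle once a maintenance command splices it into SQL,
shell, or conninfo strings:

    CREATE DATABASE "payroll""; DROP ROLE admin; --";

Under the new coding rule, such a name is wrapped in a trivial
connection string and passed to PQconnectdbParams() with
expand_dbname=true, rather than being interpolated into a larger string.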
Noah Misch 41f18f021a Promote pg_dumpall shell/connstr quoting functions to src/fe_utils.
Rename these newly-extern functions with terms more typical of their new
neighbors.  No functional changes; a subsequent commit will use them in
more places.  Back-patch to 9.1 (all supported versions).  Back branches
lack src/fe_utils, so instead rename the functions in place; the
subsequent commit will copy them into the other programs using them.

Security: CVE-2016-5424
2016-08-08 10:07:46 -04:00
Noah Misch bd65371851 Fix Windows shell argument quoting.
The incorrect quoting may have permitted arbitrary command execution.
At a minimum, it gave broader control over the command line to actors
supposed to have control over a single argument.  Back-patch to 9.1 (all
supported versions).

Security: CVE-2016-5424
2016-08-08 10:07:46 -04:00
Noah Misch 142c24c234 Reject, in pg_dumpall, names containing CR or LF.
These characters prematurely terminate Windows shell command processing,
causing the shell to execute a prefix of the intended command.  The
chief alternative to rejecting these characters was to bypass the
Windows shell with CreateProcess(), but the ability to use such names
has little value.  Back-patch to 9.1 (all supported versions).

This change formally revokes support for these characters in database
names and role names.  Don't document this; the error message is
self-explanatory, and too few users would benefit.  A future major
release may forbid creation of databases and roles so named.  For now,
check only at known weak points in pg_dumpall.  Future commits will,
without notice, reject affected names from other frontend programs.

Also extend the restriction to pg_dumpall --dbname=CONNSTR arguments and
--file arguments.  Unlike the effects on role name arguments and
database names, this does not reflect a broad policy change.  A
migration to CreateProcess() could lift these two restrictions.

Reviewed by Peter Eisentraut.

Security: CVE-2016-5424
2016-08-08 10:07:46 -04:00
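For illustration, an identifier containing a newline is legal SQL (the
name is made up), which is why the check has to live in pg_dumpall
itself:

    CREATE ROLE "report
    bot";

pg_dumpall now rejects such role and database names rather than letting
the Windows shell truncate the generated command at the line break.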
Noah Misch c400717172 Field conninfo strings throughout src/bin/scripts.
These programs nominally accepted conninfo strings, but they would
proceed to use the original dbname parameter as though it were an
unadorned database name.  This caused "reindexdb dbname=foo" to issue an
SQL command that always failed, and other programs printed a conninfo
string in error messages that purported to print a database name.  Fix
both problems by using PQdb() to retrieve actual database names.
Continue to print the full conninfo string when reporting a connection
failure.  It is informative there, and if the database name is the sole
problem, the server-side error message will include the name.  Beyond
those user-visible fixes, this allows a subsequent commit to synthesize
and use conninfo strings without that implementation detail leaking into
messages.  As a side effect, the "vacuuming database" message now
appears after, not before, the connection attempt.  Back-patch to 9.1
(all supported versions).

Reviewed by Michael Paquier and Peter Eisentraut.

Security: CVE-2016-5424
2016-08-08 10:07:46 -04:00
Noah Misch 9d924e9a64 Introduce a psql "\connect -reuse-previous=on|off" option.
The decision to reuse values of parameters from a previous connection
has been based on whether the new target is a conninfo string.  Add this
means of overriding that default.  This feature arose as one component
of a fix for security vulnerabilities in pg_dump, pg_dumpall, and
pg_upgrade, so back-patch to 9.1 (all supported versions).  In 9.3 and
later, comment paragraphs that required update had already-incorrect
claims about behavior when no connection is open; fix those problems.

Security: CVE-2016-5424
2016-08-08 10:07:46 -04:00
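A sketch of the new option in a psql session (database and host names
are made up):

    -- keep user, host and port from the current connection; switch databases
    \connect -reuse-previous=on otherdb
    -- take nothing from the current connection
    \connect -reuse-previous=off postgresql://replica.example.com/postgres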
Noah Misch 984e5beb38 Sort out paired double quotes in \connect, \password and \crosstabview.
In arguments, these meta-commands wrongly treated each paired double
quote as closing the double-quoted string.  Make the behavior match the
documentation.  This is a compatibility break, but software seems more
likely to rely, untested, on the documented behavior than on today's
behavior.  Back-patch to 9.1 (all supported versions).

Reviewed by Tom Lane and Peter Eisentraut.

Security: CVE-2016-5424
2016-08-08 10:07:46 -04:00
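With the documented behavior restored, a doubled quote inside a
double-quoted argument is a literal quote, so a database named
db "alpha" (a made-up name) is reached with:

    \connect "db ""alpha"""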
Peter Eisentraut a1f8b6bd14 doc: Update benchmark results
From: Alexander Law <exclusion@gmail.com>
2016-08-08 09:27:20 -04:00
Peter Eisentraut 8a56d4e361 Make format() error messages consistent again
07d25a964 changed only one occurrence.  Change the other one as well.
2016-08-08 08:15:41 -04:00
Tom Lane be7f7ee5ea Update 9.6 release notes through today. 2016-08-07 22:24:59 -04:00
Peter Eisentraut d8710f18f4 Correct column name in information schema
Although the standard has routines.result_cast_character_set_name, given
the naming of the surrounding columns, we concluded that this must have
been a mistake and that result_cast_char_set_name was intended, so
change the implementation.  The documentation was already using the new
name.

found by Clément Prévost <prevostclement@gmail.com>
2016-08-07 21:56:13 -04:00
Tom Lane 19322c0a78 Release notes for 9.5.4, 9.4.9, 9.3.14, 9.2.18, 9.1.23. 2016-08-07 21:31:37 -04:00
Peter Eisentraut 4a1f42f287 doc: Move mention of rsync of temp tables to better place 2016-08-07 21:27:36 -04:00
Tom Lane 95bee941be Fix misestimation of n_distinct for a nearly-unique column with many nulls.
If ANALYZE found no repeated non-null entries in its sample, it set the
column's stadistinct value to -1.0, intending to indicate that the entries
are all distinct.  But what this value actually means is that the number
of distinct values is 100% of the table's rowcount, and thus it was
overestimating the number of distinct values by however many nulls there
are.  This could lead to very poor selectivity estimates, as for example
in a recent report from Andreas Joseph Krogh.  We should discount the
stadistinct value by whatever we've estimated the nulls fraction to be.
(That is what will happen if we choose to use a negative stadistinct for
a column that does have repeated entries, so this code path was just
inconsistent.)

In addition to fixing the stadistinct entries stored by several different
ANALYZE code paths, adjust the logic where get_variable_numdistinct()
forces an "all distinct" estimate on the basis of finding a relevant unique
index.  Unique indexes don't reject nulls, so there's no reason to assume
that the null fraction doesn't apply.

Back-patch to all supported branches.  Back-patching is a bit of a judgment
call, but this problem seems to affect only a few users (else we'd have
identified it long ago), and it's bad enough when it does happen that
destabilizing plan choices in a worse direction seems unlikely.

Patch by me, with documentation wording suggested by Dean Rasheed

Report: <VisenaEmail.26.df42f82acae38a58.156463942b8@tc7-visena>
Discussion: <16143.1470350371@sss.pgh.pa.us>
2016-08-07 18:52:02 -04:00
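A rough way to see the effect (numbers vary with the sample; the table
is made up): a column that is 90% NULL and otherwise unique should now
show n_distinct near -0.1 instead of -1:

    CREATE TABLE t AS
      SELECT CASE WHEN i % 10 = 0 THEN i END AS x
      FROM generate_series(1, 100000) AS i;
    ANALYZE t;
    SELECT null_frac, n_distinct
    FROM pg_stats WHERE tablename = 't' AND attname = 'x';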
Tom Lane 8a8c6b5381 Fix crash when pg_get_viewdef_name_ext() is passed a non-view relation.
Oversight in commit 976b24fb4.

Andreas Seltenreich

Report: <87y448l3ag.fsf@credativ.de>
2016-08-07 17:56:34 -04:00
Tom Lane 9ee1cf04ab Fix TOAST access failure in RETURNING queries.
Discussion of commit 3e2f3c2e4 exposed a problem that is of longer
standing: since we don't detoast data while sticking it into a portal's
holdStore for PORTAL_ONE_RETURNING and PORTAL_UTIL_SELECT queries, and we
release the query's snapshot as soon as we're done loading the holdStore,
later readout of the holdStore can do TOAST fetches against data that can
no longer be seen by any of the session's live snapshots.  This means that
a concurrent VACUUM could remove the TOAST data before we can fetch it.
Commit 3e2f3c2e4 exposed the problem by showing that sometimes we had *no*
live snapshots while fetching TOAST data, but we'd be at risk anyway.

I believe this code was all right when written, because our management of a
session's exposed xmin was such that the TOAST references were safe until
end of transaction.  But that's no longer true now that we can advance or
clear our PGXACT.xmin intra-transaction.

To fix, copy the query's snapshot during FillPortalStore() and save it in
the Portal; release it only when the portal is dropped.  This essentially
implements a policy that we must hold a relevant snapshot whenever we
access potentially-toasted data.  We had already come to that conclusion
in other places, cf commits 08e261cbc9 and ec543db77b.

I'd have liked to add a regression test case for this, but I didn't see
a way to make one that's not unreasonably bloated; it seems to require
returning a toasted value to the client, and those will be big.

In passing, improve PortalRunUtility() so that it positively verifies
that its ending PopActiveSnapshot() call will pop the expected snapshot,
removing a rather shaky assumption about which utility commands might
do their own PopActiveSnapshot().  There's no known bug here, but now
that we're actively referencing the snapshot it's almost free to make
this code a bit more bulletproof.

We might want to consider back-patching something like this into older
branches, but it would be prudent to let it prove itself more in HEAD
beforehand.

Discussion: <87vazemeda.fsf@credativ.de>
2016-08-07 17:46:08 -04:00
Tom Lane 07a601eeda Avoid crashing in GetOldestSnapshot() if there are no known snapshots.
The sole caller expects NULL to be returned in such a case, so make
it so and document it.

Per reports from Andreas Seltenreich and Regina Obe.  This doesn't
really fix their problem, as now their RETURNING queries will say
"ERROR: no known snapshots", but in any case this function should
not dump core in a reasonably-foreseeable situation.

Report: <87vazemeda.fsf@credativ.de>
Report: <20160807051854.1427.32414@wrigleys.postgresql.org>
2016-08-07 14:36:02 -04:00
Tom Lane bcbecbce2f Don't propagate a null subtransaction snapshot up to parent transaction.
This oversight could cause logical decoding to fail to decode an outer
transaction containing changes, if a subtransaction had an XID but no
actual changes.  Per bug #14279 from Marko Tiikkaja.  Patch by Marko
based on analysis by Andrew Gierth.

Discussion: <20160804191757.1430.39011@wrigleys.postgresql.org>
2016-08-07 13:15:55 -04:00
Tom Lane 3676631c68 First-draft release notes for 9.5.4.
As usual, the release notes for other branches will be made by cutting
these down, but put them up for community review first.
2016-08-06 22:08:31 -04:00
Tom Lane e89526d4f3 In B-tree page deletion, clean up properly after page deletion failure.
In _bt_unlink_halfdead_page(), we might fail to find an immediate left
sibling of the target page, perhaps because of corruption of the page
sibling links.  The code intends to cope with this by just abandoning
the deletion attempt; but what actually happens is that it fails outright
due to releasing the same buffer lock twice.  (And error recovery masks
a second problem, which is possible leakage of a pin on another page.)
Seems to have been introduced by careless refactoring in commit efada2b8e.
Since there are multiple cases to consider, let's make releasing the buffer
lock in the failure case the responsibility of _bt_unlink_halfdead_page()
not its caller.

Also, avoid fetching the leaf page's left-link again after we've dropped
lock on the page.  This is probably harmless, but it's not exactly good
coding practice.

Per report from Kyotaro Horiguchi.  Back-patch to 9.4 where the faulty code
was introduced.

Discussion: <20160803.173116.111915228.horiguchi.kyotaro@lab.ntt.co.jp>
2016-08-06 14:28:37 -04:00
Tom Lane 69dc5ae408 Teach libpq to decode server version correctly from future servers.
Beginning with the next development cycle, PG servers will report two-part
not three-part version numbers.  Fix libpq so that it will compute the
correct numeric representation of such server versions for reporting by
PQserverVersion().  It's desirable to get this into the field and
back-patched ASAP, so that older clients are more likely to understand the
new server version numbering by the time any such servers are in the wild.

(The results with an old client would probably not be catastrophic anyway
for a released server; for example "10.1" would be interpreted as 100100
which would be wrong in detail but would not likely cause an old client to
misbehave badly.  But "10devel" or "10beta1" would result in sversion==0
which at best would result in disabling all use of modern features.)

Extracted from a patch by Peter Eisentraut; comments added by me

Patch: <802ec140-635d-ad86-5fdf-d3af0e260c22@2ndquadrant.com>
2016-08-05 18:58:12 -04:00
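For comparison, the numeric representation computed by PQserverVersion()
follows the same scheme the server reports in server_version_num (the
values below are illustrative):

    SHOW server_version;       -- e.g. 9.6.4, or 10.1 for a two-part version
    SHOW server_version_num;   -- e.g. 90604, or 100001 respectively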
Tom Lane fc509cd824 Fix copy-and-pasteo in 81c766b3fd.
Report: <57A4E6DF.8070209@dunslane.net>
2016-08-05 16:21:38 -04:00
Tom Lane f10eab73df Make array_to_tsvector() sort and de-duplicate the given strings.
This is required for the result to be a legal tsvector value.
Noted while fooling with Andreas Seltenreich's ts_delete() crash.

Discussion: <87invhoj6e.fsf@credativ.de>
2016-08-05 16:09:06 -04:00
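For example, the function now returns a sorted, de-duplicated tsvector:

    SELECT array_to_tsvector(ARRAY['fat', 'cat', 'fat']);
    -- expected result: 'cat' 'fat'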
Tom Lane c50d192ce3 Fix ts_delete(tsvector, text[]) to cope with duplicate array entries.
Such cases either failed an Assert, or produced a corrupt tsvector in
non-Assert builds, as reported by Andreas Seltenreich.  The reason is
that tsvector_delete_by_indices() just assumed that its input array had
no duplicates.  Fix by explicitly de-duping.

In passing, improve some comments, and fix a number of tests for null
values to use ERRCODE_NULL_VALUE_NOT_ALLOWED not
ERRCODE_INVALID_PARAMETER_VALUE.

Discussion: <87invhoj6e.fsf@credativ.de>
2016-08-05 15:14:19 -04:00
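With the fix, duplicate entries in the array argument are harmless:

    SELECT ts_delete('fat:2 cat:3 rat:5'::tsvector, ARRAY['fat', 'fat']);
    -- expected result: 'cat':3 'rat':5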
Tom Lane 33fe7360af Re-pgindent tsvector_op.c.
Messed up by recent commits --- this is annoying me while trying to fix
some bugs here.
2016-08-05 15:14:19 -04:00
Bruce Momjian 5ebad9a580 docs: re-add spaces before units removed
This re-adds the spaces before k/M/G/TB units that commit ca0c37b56f
removed for consistency.

Discussion: 20160802165116.GC32575@momjian.us
2016-08-05 14:36:17 -04:00
Tom Lane a629330b29 Update time zone data files to tzdata release 2016f.
DST law changes in Kemerovo and Novosibirsk.  Historical corrections for
Azerbaijan, Belarus, and Morocco.  Asia/Novokuznetsk and Asia/Novosibirsk
now use numeric time zone abbreviations instead of invented ones.  Zones
for Antarctic bases and other locations that have been uninhabited for
portions of the time span known to the tzdata database now report "-00"
rather than "zzz" as the zone abbreviation for those time spans.

Also, I decided to remove some of the timezone/data/ files that we don't
use.  At one time that subdirectory was a complete copy of what IANA
distributes in the tzdata tarballs, but that hasn't been true for a long
time.  There seems no good reason to keep shipping those specific files
but not others; they're just bloating our tarballs.
2016-08-05 12:59:29 -04:00
Robert Haas 81c766b3fd Change InitToastSnapshot to a macro.
tqual.h is included in some front-end compiles, and a static inline
breaks on buildfarm member castoroides.  Since the macro is never
referenced, it should dodge that problem, although this doesn't
seem like the cleanest way of hiding things from front-end compiles.

Report and review by Tom Lane; patch by me.
2016-08-05 11:58:03 -04:00
Andres Freund e7caacf733 Fix hard to hit race condition in heapam's tuple locking code.
As mentioned in its commit message, eca0f1db left open a race condition
where a page could be marked all-visible after the code checked
PageIsAllVisible() to pin the VM, but before the page was locked.  Plug
that hole.

Reviewed-By: Robert Haas, Andres Freund
Author: Amit Kapila
Discussion: CAEepm=3fWAbWryVW9swHyLTY4sXVf0xbLvXqOwUoDiNCx9mBjQ@mail.gmail.com
Backpatch: -
2016-08-04 20:07:16 -07:00
Bruce Momjian 4eb4b3f245 docs: mention rsync of temp and unlogged tables
This happens when using rsync to pg_upgrade slaves.

Reported-by: Jerry Sievers

Discussion: 20160726161946.GA3511@momjian.us
2016-08-04 18:55:36 -04:00
Tom Lane 8d498a5c8a Fix bogus coding in WaitForBackgroundWorkerShutdown().
Some conditions resulted in "return" directly out of a PG_TRY block,
which left the exception stack dangling, and to add insult to injury
failed to restore the state of set_latch_on_sigusr1.

This is a bug only in 9.5; in HEAD it was accidentally fixed by commit
db0f6cad4, which removed the surrounding PG_TRY block.  However, I (tgl)
chose to apply the patch to HEAD as well, because the old coding was
gratuitously different from WaitForBackgroundWorkerStartup(), and there
would indeed have been no bug if it were done like that to start with.

Dmitry Ivanov

Discussion: <1637882.WfYN5gPf1A@abook>
2016-08-04 16:06:14 -04:00
Peter Eisentraut 81568a971f doc: Move indexterms to avoid whitespace issue in man pages 2016-08-03 17:02:00 -04:00
Robert Haas 3e2f3c2e42 Prevent "snapshot too old" from trying to return pruned TOAST tuples.
Previously, we tested for MVCC snapshots to see whether they were too
old, but not TOAST snapshots, which can lead to complaints about missing
TOAST chunks if those chunks are subject to early pruning.  Ideally,
the threshold lsn and timestamp for a TOAST snapshot would be that of
the corresponding MVCC snapshot, but since we have no way of deciding
which MVCC snapshot was used to fetch the TOAST pointer, use the oldest
active or registered snapshot instead.

Reported by Andres Freund, who also sketched out what the fix should
look like.  Patch by me, reviewed by Amit Kapila.
2016-08-03 16:50:01 -04:00
Tom Lane a3c7a993d5 Make INSERT-from-multiple-VALUES-rows handle targetlist indirection better.
Previously, if an INSERT with multiple rows of VALUES had indirection
(array subscripting or field selection) in its target-columns list, the
parser handled that by applying transformAssignedExpr() to each element
of each VALUES row independently.  This led to having ArrayRef assignment
nodes or FieldStore nodes in each row of the VALUES RTE.  That works for
simple cases, but in bug #14265 Nuri Boardman points out that it fails
if there are multiple assignments to elements/fields of the same target
column.  For such cases to work, rewriteTargetListIU() has to nest the
ArrayRefs or FieldStores together to produce a single expression to be
assigned to the column.  But it failed to find them in the top-level
targetlist and issued an error about "multiple assignments to same column".

We could possibly fix this by teaching the rewriter to apply
rewriteTargetListIU to each VALUES row separately, but that would be messy
(it would change the output rowtype of the VALUES RTE, for example) and
inefficient.  Instead, let's fix the parser so that the VALUES RTE outputs
are just the user-specified values, cast to the right type if necessary,
and then the ArrayRefs or FieldStores are applied in the top-level
targetlist to Vars representing the RTE's outputs.  This is the same
parsetree representation already used for similar cases with INSERT/SELECT
syntax, so it allows simplifications in ruleutils.c, which no longer needs
to treat INSERT-from-multiple-VALUES as its own special case.

This implementation works by applying transformAssignedExpr to the VALUES
entries as before, and then stripping off any ArrayRefs or FieldStores it
adds.  With lots of VALUES rows it would be noticeably more efficient to
not add those nodes in the first place.  But that's just an optimization
not a bug fix, and there doesn't seem to be any good way to do it without
significant refactoring.  (A non-invasive answer would be to apply
transformAssignedExpr + stripping to just the first VALUES row, and then
just forcibly cast remaining rows to the same data types exposed in the
first row.  But this way would lead to different, not-INSERT-specific
errors being reported in casting failure cases, so it doesn't seem very
nice.)  So leave that for later; this patch at least isn't making the
per-row parsing work worse, and it does make the finished parsetree
smaller, saving rewriter and planner work.

Catversion bump because stored rules containing such INSERTs would need
to change.  Because of that, no back-patch, even though this is a very
long-standing bug.

Report: <20160727005725.7438.26021@wrigleys.postgresql.org>
Discussion: <9578.1469645245@sss.pgh.pa.us>
2016-08-03 16:37:03 -04:00
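A case of the shape fixed here, per bug #14265 (the table definition is
made up): multiple assignments to elements of the same target column
across a multi-row VALUES list, which previously drew the "multiple
assignments to same column" error:

    CREATE TABLE foo (a int[]);
    INSERT INTO foo (a[1], a[2]) VALUES (1, 2), (3, 4);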
Tom Lane ef1b5af823 Do not let PostmasterContext survive into background workers.
We don't want postmaster child processes to contain a copy of the
postmaster's PostmasterContext.  That would be a waste of memory at least,
and at worst a security issue, since there are copies of the semi-sensitive
pg_hba and pg_ident data in there.  All other child process types delete
the PostmasterContext after forking, but the original coding of the
background worker patch (commit da07a1e85) did not do so.  It appears
that the only reason for that was to avoid copying the bgworker's
MyBgworkerEntry out of that context; but the couple of additional
statements needed to do so are hardly good justification for it.  Hence,
copy that data and then clear the context as other child processes do.

Because this patch changes the memory context in which a bgworker function
gains control, back-patching it would be a bit risky, so we won't fix this
in back branches.  The "security" complaint is pretty thin anyway for
generic bgworkers; only with the introduction of parallel query is there
any question of running untrusted code in a bgworker process.

Discussion: <14111.1470082717@sss.pgh.pa.us>
2016-08-03 14:48:13 -04:00
Peter Eisentraut 6a9e09c49e Add missing casts in information schema
From: Clément Prévost <prevostclement@gmail.com>
2016-08-03 14:41:01 -04:00
Peter Eisentraut 2b8fd4fa67 doc: Remove documentation of nonexistent information schema columns
These were probably copied in by accident.

From: Clément Prévost <prevostclement@gmail.com>
2016-08-03 13:50:13 -04:00
Alvaro Herrera b26f7fa6ae Fix assorted problems in recovery tests
In test 001_stream_rep we're using pg_stat_replication.write_location to
determine catch-up status, but we care about xlog having been applied,
not just received, so change that to apply_location.

In test 003_recovery_targets, we query the database for a recovery
target specification and later for the xlog position supposedly
corresponding to that recovery specification.  If for whatever reason
more WAL is written between the two queries, the recovery specification
is earlier than the xlog position used by the query in the test harness,
so we wait forever, leading to test failures.  Deal with this by using a
single query to extract both items.  In 2a0f89cd71 we tried to deal
with it by giving them more time to run, but in hindsight that was
obviously doomed to failure (no revert of that, though).

Per hamster buildfarm failures.

Author: Michaël Paquier
2016-08-03 13:21:23 -04:00
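The single-query pattern referred to above looks roughly like this (the
exact recovery-target item differs per test; function names are as of
9.6), so the target and its WAL position are captured atomically:

    SELECT txid_current(), pg_current_xlog_location();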
Peter Eisentraut 69bdfc4090 doc: Change recommendation to put NOTIFY into a rule
Suggest a statement trigger instead.
2016-08-03 12:29:15 -04:00
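The recommended replacement looks roughly like this (object names are
made up; 9.6 trigger syntax uses EXECUTE PROCEDURE):

    CREATE FUNCTION notify_mytable_change() RETURNS trigger
    LANGUAGE plpgsql AS $$
    BEGIN
      NOTIFY mytable_changed;
      RETURN NULL;
    END;
    $$;

    CREATE TRIGGER mytable_notify
      AFTER INSERT OR UPDATE OR DELETE ON mytable
      FOR EACH STATEMENT EXECUTE PROCEDURE notify_mytable_change();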
Kevin Grittner c93d8737be Add OldSnapshotTimeMapLock to wait_event table in docs.
Ashutosh Sharma with minor fixes by me.
2016-08-03 09:58:50 -05:00