Commit Graph

39218 Commits

Tom Lane
80be41979e Fix possible internal overflow in numeric division.
div_var_fast() postpones propagating carries in the same way as mul_var(),
so it has the same corner-case overflow risk we fixed in 246693e5ae,
namely that the size of the carries has to be accounted for when setting
the threshold for executing a carry propagation step.  We've not devised
a test case illustrating the brokenness, but the required fix seems clear
enough.  Like the previous fix, back-patch to all active branches.

Dean Rasheed
2015-11-17 15:46:54 -05:00
Tom Lane
331828b754 Back-patch fixes to make TAP tests work on Windows.
This back-ports commit 13d856e177 and assorted follow-on patches
into 9.4 and 9.5.  9.5 and HEAD are now substantially identical in all
the files touched by this commit, except that 010_pg_basebackup.pl has
a few more tests related to the new --slot option.  9.4 has many fewer
TAP tests, but the test infrastructure files are substantially the same,
with the exception that 9.4 lacks the single-tmp-install infrastructure
introduced in 9.5 (commit dcae5facca).

The primary motivation for this patch is to ensure that TAP test case
fixes can be back-patched without hazards of the kind seen in commits
34557f544/06dd4b44f.  In principle it should also make the world safe
for running the TAP tests in the buildfarm in these branches; although
we might want to think about back-porting dcae5facca to 9.4 if
we're going to do that for real, because the TAP tests are quite disk
space hungry without it.

Michael Paquier did the back-porting work; original patches were by
him and assorted other people.
2015-11-17 14:10:24 -05:00
Peter Eisentraut
a408bd58a6 Message style fix
from Euler Taveira
2015-11-17 06:53:38 -05:00
Peter Eisentraut
b6a6340b17 Improve message 2015-11-16 22:26:47 -05:00
Peter Eisentraut
689cabf402 Message improvements 2015-11-16 21:16:42 -05:00
Peter Eisentraut
75c8af902e doc: Fix commas and improve spacing 2015-11-16 19:01:05 -05:00
Tom Lane
34d4f49bb9 Speed up ruleutils' name de-duplication code, and fix overlength-name case.
Since commit 11e131854f, ruleutils.c has
attempted to ensure that each RTE in a query or plan tree has a unique
alias name.  However, the code that was added for this could be quite slow,
even as bad as O(N^3) if N identical RTE names must be replaced, as noted
by Jeff Janes.  Improve matters by building a transient hash table within
set_rtable_names.  The hash table in itself reduces the cost of detecting a
duplicate from O(N) to O(1), and we can save another factor of N by storing
the number of de-duplicated names already created for each entry, so that
we don't have to re-try names already created.  This way is probably a bit
slower overall for small range tables, but almost by definition, such cases
should not be a performance problem.

In principle the same problem applies to the column-name-de-duplication
code; but in practice that seems to be less of a problem, first because
N is limited since we don't support extremely wide tables, and second
because duplicate column names within an RTE are fairly rare, so that in
practice the cost is more like O(N^2) not O(N^3).  It would be very much
messier to fix the column-name code, so for now I've left that alone.

An independent problem in the same area was that the de-duplication code
paid no attention to the identifier length limit, and would happily produce
identifiers that were longer than NAMEDATALEN and wouldn't be unique after
truncation to NAMEDATALEN.  This could result in dump/reload failures, or
perhaps even views that silently behaved differently than before.  We can
fix that by shortening the base name as needed.  Fix it for both the
relation and column name cases.

In passing, check for interrupts in set_rtable_names, just in case it's
still slow enough to be an issue.

Back-patch to 9.3 where this code was introduced.
2015-11-16 13:45:17 -05:00
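
A small sketch of the alias de-duplication this commit speeds up, using hypothetical object names: two range-table entries share the alias "f", so the deparsed view definition has to rename one of them (typically to something like "f_1").

CREATE TABLE foo (x int);
CREATE VIEW v AS
    SELECT f.x FROM foo f
    WHERE f.x IN (SELECT f.x FROM foo f);
-- Deparsing runs set_rtable_names; the clashing inner alias is renamed.
SELECT pg_get_viewdef('v'::regclass, true);
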
Tom Lane
0489a048d3 Fix ruleutils.c's dumping of whole-row Vars in ROW() and VALUES() contexts.
Normally ruleutils prints a whole-row Var as "foo.*".  We already knew that
that doesn't work at top level of a SELECT list, because the parser would
treat the "*" as a directive to expand the reference into separate columns,
not a whole-row Var.  However, Joshua Yanovski points out in bug #13776
that the same thing happens at top level of a ROW() construct; and some
nosing around in the parser shows that the same is true in VALUES().
Hence, apply the same workaround already devised for the SELECT-list case,
namely to add a forced cast to the appropriate rowtype in these cases.
(The alternative of just printing "foo" was rejected because it is
difficult to avoid ambiguity against plain columns named "foo".)

Back-patch to all supported branches.
2015-11-15 14:41:09 -05:00
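
A hedged sketch of the case described above, with hypothetical names; after the fix the deparsed definition carries an explicit rowtype cast rather than a bare "foo.*" inside ROW():

CREATE TABLE foo (a int, b text);
CREATE VIEW v AS SELECT ROW(foo.*) AS r FROM foo;
-- Expected to deparse roughly as ROW(foo.*::foo), so reloading the view
-- keeps the whole-row reference instead of expanding it into columns.
SELECT pg_get_viewdef('v'::regclass, true);
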
Bruce Momjian
fae58d5bed pg_upgrade: properly detect file copy failure on Windows
Previously, file copy failures were ignored on Windows due to an
incorrect return value check.

Report by Manu Joye

Backpatch through 9.1
2015-11-14 11:47:11 -05:00
Stephen Frost
d324c72261 Correct sepgsql docs with regard to RLS
The sepgsql docs included a comment that PG doesn't support RLS.  That
is only true for versions prior to 9.5.

Update the docs for 9.5 and master to say that PG supports RLS but that
sepgsql does not yet.

Pointed out by Heikki.

Back-patch to 9.5
2015-11-13 11:06:42 -05:00
Alvaro Herrera
5094da99b9 vacuumdb: don't prompt for passwords over and over
Having the script prompt for passwords over and over was a preexisting
problem when it processed multiple databases or when it processed
multiple analyze stages, but the parallel mode introduced in commit
a179232047 made it worse.

Fix the annoyance by keeping a copy of the password used by the first
connection that requires one.  Since users can (currently) only have a
single password, there's no need for more complex arrangements (such as
remembering one password per database).

Per bug #13741 reported by Eric Brown.  Patch authored and
cross-reviewed by Haribabu Kommi and Michael Paquier, slightly tweaked
by Álvaro Herrera.

Discussion: http://www.postgresql.org/message-id/20151027193919.931.54948@wrigleys.postgresql.org
Backpatch to 9.5, where parallel vacuumdb was introduced.
2015-11-12 18:05:23 -03:00
Tom Lane
747854f010 Fix unwanted flushing of libpq's input buffer when socket EOF is seen.
In commit 210eb9b743 I centralized libpq's logic for closing down
the backend communication socket, and made the new pqDropConnection
routine always reset the I/O buffers to empty.  Many of the call sites
previously had not had such code, and while that amounted to an oversight
in some cases, there was one place where it was intentional and necessary
*not* to flush the input buffer: pqReadData should never cause that to
happen, since we probably still want to process whatever data we read.

This is the true cause of the problem Robert was attempting to fix in
c3e7c24a1d, namely that libpq no longer reported the backend's final
ERROR message before reporting "server closed the connection unexpectedly".
But that only accidentally fixed it, by invoking parseInput before the
input buffer got flushed; and very likely there are timing scenarios
where we'd still lose the message before processing it.

To fix, pass a flag to pqDropConnection to tell it whether to flush the
input buffer or not.  On review I think flushing is actually correct for
every other call site.

Back-patch to 9.3 where the problem was introduced.  In HEAD, also improve
the comments added by c3e7c24a1d.
2015-11-12 13:03:52 -05:00
Tom Lane
a8c209fce1 Do a round of copy-editing on the 9.5 release notes.
Also fill in the previously empty "major enhancements" list.  YMMV as to
which items should make the cut, but it's past time we had something more
than a placeholder here.

(I meant to get this done before beta2 was wrapped, but got distracted by
PDF build problems.  Better late than never.)
2015-11-11 19:19:14 -05:00
Tom Lane
bcb8f96e67 Improve documentation around autovacuum-related storage parameters.
These were discussed in three different sections of the manual, which
unsurprisingly had diverged over time; and the descriptions of individual
variables lacked stylistic consistency even within each section (and
frequently weren't in very good English anyway).  Clean up the mess, and
remove some of the redundant information in hopes that future additions
will be less likely to re-introduce inconsistency.  For instance I see
no need for maintenance.sgml to include its very own list of all the
autovacuum storage parameters, especially since that list was already
incomplete.
2015-11-11 17:13:38 -05:00
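
For reference, the per-table autovacuum storage parameters those sections describe are set with ALTER TABLE ... SET; a short sketch with a hypothetical table:

-- Parameter names are the standard autovacuum reloptions.
ALTER TABLE orders SET (
    autovacuum_enabled = true,
    autovacuum_vacuum_scale_factor = 0.05,
    autovacuum_vacuum_threshold = 1000,
    autovacuum_analyze_scale_factor = 0.02
);
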
Tom Lane
0819778c43 Docs: fix misleading example.
Commit 8457d0beca introduced an example which, while not incorrect,
failed to exhibit the behavior it meant to describe, as a result of omitting
an E'' prefix that needed to be there.  Noticed and fixed by Peter Geoghegan.

I (tgl) failed to resist the temptation to wordsmith nearby text a bit
while at it.
2015-11-10 22:11:39 -05:00
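
The exact documentation example is not reproduced here, but the point of the E'' prefix can be sketched as follows (with standard_conforming_strings on, the default):

SELECT E'foo\nbar';   -- escape-string constant: \n becomes a newline
SELECT 'foo\nbar';    -- ordinary constant: the backslash stays literal
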
Tom Lane
8d20eaa62b Improve our workaround for 'TeX capacity exceeded' in building PDF files.
In commit a5ec86a7c7 I wrote a quick hack
that reduced the number of TeX string pool entries created while converting
our documentation to PDF form.  That held the fort for awhile, but as of
HEAD we're back up against the same limitation.  It turns out that the
original coding of \FlowObjectSetup actually results in *three* string pool
entries being generated for every "flow object" (that is, potential
cross-reference target) in the documentation, and my previous hack only got
rid of one of them.  With a little more care, we can reduce the string
count to one per flow object plus one per actually-cross-referenced flow
object (about 115000 + 5000 as of current HEAD); that should work until
the documentation volume roughly doubles from where it is today.

As a not-incidental side benefit, this change also causes pdfjadetex to
stop emitting unreferenced hyperlink anchors (bookmarks) into the PDF file.
It had been making one willy-nilly for every flow object; now it's just one
per actually-cross-referenced object.  This results in close to a 2X
savings in PDF file size.  We will still want to run the output through
"jpdftweak" to get it to be compressed; but we no longer need removal of
unreferenced bookmarks, so we might be able to find a quicker tool for
that step.

Although the failure only affects HEAD and US-format output at the moment,
9.5 cannot be more than a few pages short of failing likewise, so it
will inevitably fail after a few rounds of minor-version release notes.
I don't have a lot of faith that we'll never hit the limit in the older
branches; and anyway it would be nice to get rid of jpdftweak across the
board.  Therefore, back-patch to all supported branches.
2015-11-10 16:00:13 -05:00
Robert Haas
eb66ee639e Stamp 9.5beta2. 2015-11-09 14:54:00 -05:00
Peter Eisentraut
289da0a7a5 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: cd263526676705b4a8a3a708c9842461c4a2bcc3
2015-11-09 10:21:11 -05:00
Andres Freund
90e074baec Add paragraph about ON CONFLICT interaction with partitioning.
Author: Peter Geoghegan and Andres Freund
Discussion: CAM3SWZScpWzQ-7EJC77vwqzZ1GO8GNmURQ1QqDQ3wRn7AbW1Cg@mail.gmail.com,
    CAHGQGwFUCWwSU7dtc2aRdRk73ztyr_jY5cPOyts+K8xKJ92X4Q@mail.gmail.com
Backpatch: 9.5, where UPSERT was introduced
2015-11-09 06:58:26 +01:00
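
A hedged sketch of the kind of statement the new paragraph concerns, with hypothetical names; the arbiter unique index must exist on the table actually named in the INSERT, which is the crux of ON CONFLICT's interaction with inheritance-based partitioning:

INSERT INTO measurements_2015_11 (city_id, logdate, peaktemp)
VALUES (1, '2015-11-09', 22)
ON CONFLICT (city_id, logdate) DO UPDATE
    SET peaktemp = EXCLUDED.peaktemp;
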
Andres Freund
04c0b63365 Set replication origin when decoding commit records.
By accident the replication origin was not set properly in
DecodeCommit(). That's bad because the origin is passed to the output
plugins origin filter, and accessible from the output plugin via
ReorderBufferTXN->origin_id.  Accessing the origin of individual changes
worked before the fix, which is why this wasn't noticed earlier.

Reported-By: Craig Ringer
Author: Craig Ringer
Discussion: CAMsr+YFhBJLp=qfSz3-J+0P1zLkE8zNXM2otycn20QRMx380gw@mail.gmail.com
Backpatch: 9.5, where replication origins were introduced
2015-11-09 00:03:35 +01:00
Noah Misch
40c28678aa Fix 9.5 version of previous commit to match its log message. 2015-11-08 17:40:19 -05:00
Noah Misch
bdb42bac3c Don't connect() to a wildcard address in test_postmaster_connection().
At least OpenBSD, NetBSD, and Windows don't support it.  This repairs
pg_ctl for listen_addresses='0.0.0.0' and listen_addresses='::'.  Since
pg_ctl prefers to test a Unix-domain socket, Windows users are most
likely to need this change.  Back-patch to 9.1 (all supported versions).
This could change pg_ctl interaction with loopback-interface firewall
rules.  Therefore, in 9.4 and earlier (released branches), activate the
change only on known-affected platforms.

Reported (bug #13611) and designed by Kondo Yuta.
2015-11-08 17:29:47 -05:00
Tom Lane
5daafafe74 Update 9.5 release notes through today. 2015-11-07 17:09:04 -05:00
Tom Lane
ab994cc00e Rename PQsslAttributes() to PQsslAttributeNames(), and const-ify fully.
Per discussion, the original name was a bit misleading, and
PQsslAttributeNames() seems more apropos.  It's not quite too late to
change this in 9.5, so let's change it while we can.

Also, make sure that the pointer array is const, not only the pointed-to
strings.

Minor documentation wordsmithing while at it.

Lars Kanis, slight adjustments by me
2015-11-07 16:13:49 -05:00
Tom Lane
44fc251536 Fix enforcement of restrictions inside regexp lookaround constraints.
Lookahead and lookbehind constraints aren't allowed to contain backrefs,
and parentheses within them are always considered non-capturing.  Or so
says the manual.  But the regexp parser forgot about these rules once
inside a parenthesized subexpression, so that constructs like (\w)(?=(\1))
were accepted (but then not correctly executed --- a case like this acted
like (\w)(?=\w), without any enforcement that the two \w's match the same
text).  And in (?=((foo))) the innermost parentheses would be counted as
capturing parentheses, though no text would ever be captured for them.

To fix, properly pass down the "type" argument to the recursive invocation
of parse().

Back-patch to all supported branches; it was agreed that silent
misexecution of such patterns is worse than throwing an error, even though
new errors in minor releases are generally not desirable.
2015-11-07 12:43:24 -05:00
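
A hedged illustration of the behavior change: a backref inside a lookahead, previously accepted but misexecuted, is expected to draw an error after the fix, while the legal form keeps working:

SELECT 'aa' ~ '(\w)(?=(\1))';  -- formerly acted like (\w)(?=\w); now expected
                               -- to fail with an invalid-regular-expression error
SELECT 'aa' ~ '(\w)(?=\w)';    -- still valid, returns true
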
Stephen Frost
695012a0d5 Set include_realm=1 default in parse_hba_line
With include_realm=1 being set down in parse_hba_auth_opt, if multiple
options are passed on the pg_hba line, such as:

host all     all    0.0.0.0/0    gss include_realm=0 krb_realm=XYZ.COM

We would mistakenly reset include_realm back to 1.  Instead, we need to
set include_realm=1 up in parse_hba_line, prior to parsing any of the
additional options.

Discovered by Jeff McCormick during testing.

Bug introduced by 9a08841.

Back-patch to 9.5
2015-11-06 11:18:33 -05:00
Tom Lane
4d867458fc Fix erroneous hash calculations in gin_extract_jsonb_path().
The jsonb_path_ops code calculated hash values inconsistently in some cases
involving nested arrays and objects.  This would result in queries possibly
not finding entries that they should find, when using a jsonb_path_ops GIN
index for the search.  The problem cases involve JSONB values that contain
both scalars and sub-objects at the same nesting level, for example an
array containing both scalars and sub-arrays.  To fix, reset the current
stack->hash after processing each value or sub-object, not before; and
don't try to be cute about the outermost level's initial hash.

Correcting this means that existing jsonb_path_ops indexes may now be
inconsistent with the new hash calculation code.  The symptom is the same
--- searches not finding entries they should find --- but the specific
rows affected are likely to be different.  Users will need to REINDEX
jsonb_path_ops indexes to make sure that all searches work as expected.

Per bug #13756 from Daniel Cheng.  Back-patch to 9.4 where the faulty
logic was introduced.
2015-11-05 18:15:48 -05:00
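
The practical upshot for users, sketched with hypothetical names: rebuild existing jsonb_path_ops indexes after updating to a release that contains this fix.

CREATE INDEX doc_body_pathops_idx
    ON documents USING gin (body jsonb_path_ops);
-- After updating, rebuild so the stored hashes match the corrected code:
REINDEX INDEX doc_body_pathops_idx;
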
Robert Haas
c98605cc47 Pass extra data to bgworkers, and use this to fix parallel contexts.
Up until now, the total amount of data that could be passed to a
background worker at startup was one datum, which can be a small as
4 bytes on some systems.  That's enough to pass a dsm_handle or an
array index, but not much else.  Add a bgw_extra flag to the
BackgroundWorker struct, allowing up to 128 bytes to be passed to
a new worker on any platform.

Use this to fix a problem I recently discovered with the parallel
context machinery added in 9.5: the master assigns each worker an
array index, and each worker subsequently assigns itself an array
index, and there's nothing to guarantee that the two sets of indexes
match, leading to chaos.

Normally, I would not back-patch the change to add bgw_extra, since it
is basically a feature addition.  However, since 9.5 is still in beta
and there seems to be no other sensible way to repair the broken
parallel context machinery, back-patch to 9.5.  Existing background
worker code can ignore the bgw_extra field without a problem, but
might need to be recompiled since the structure size has changed.

Report and patch by me.  Review by Amit Kapila.
2015-11-05 12:21:00 -05:00
Robert Haas
1d97b25501 Improve comments about abbreviation abort.
Peter Geoghegan
2015-11-03 14:14:15 -05:00
Tom Lane
fdae4a93e9 Remove obsolete advice about doubling backslashes in regex escapes.
Standard-conforming literals have been the default for long enough that
it no longer seems necessary to go out of our way to tell people to write
regex escapes illegibly.
2015-11-03 11:58:03 -05:00
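
With standard-conforming literals on by default, regex escapes can be written directly, which is why the doubled-backslash advice was dropped; a quick sketch:

SELECT 'abc123' ~ '\d+';     -- plain literal: the regex engine sees \d+
SELECT 'abc123' ~ E'\\d+';   -- the old doubled style the removed advice described
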
Tom Lane
f4057cdffc Code + docs review for unicode linestyle patch.
Fix some brain fade in commit a2dabf0e1d: erroneous variable names
in docs, rearrangements that made sentences less clear not more so,
undocumented and poorly-chosen-anyway API behaviors of subroutines,
bad grammar in error messages, copy-and-paste faults.

Albe Laurenz and Tom Lane
2015-11-03 11:49:21 -05:00
Robert Haas
fd5ce6b89b shm_mq: Third attempt at fixing nowait behavior in shm_mq_receive.
Commit a1480ec1d3 purported to fix the
problems with commit b2ccb5f4e6, but it
didn't completely fix them.  The problem is that the checks were
performed in the wrong order, leading to a race condition.  If the
sender attached, sent a message, and detached after the receiver
called shm_mq_get_sender and before the receiver called
shm_mq_counterparty_gone, we'd incorrectly return SHM_MQ_DETACHED
before all messages were read.  Repair by reversing the order of
operations, and add a long comment explaining why this new logic is
(hopefully) correct.
2015-11-03 09:18:26 -05:00
Kevin Grittner
67d4738d93 Add RMV to list of commands taking AE lock.
Backpatch to 9.3, where it was initially omitted.

Craig Ringer, with minor adjustment by Kevin Grittner
2015-11-02 06:26:28 -06:00
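
For context, a hedged example with a hypothetical materialized view: the plain form of REFRESH MATERIALIZED VIEW (the "RMV" in the subject) takes an ACCESS EXCLUSIVE lock on the view, which is the entry added to the lock-mode list:

REFRESH MATERIALIZED VIEW sales_summary;
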
Kevin Grittner
50ca917d91 Fix serialization anomalies due to race conditions on INSERT.
On insert the CheckForSerializableConflictIn() test was performed
before the page(s) which were going to be modified had been locked
(with an exclusive buffer content lock).  If another process
acquired a relation SIReadLock on the heap and scanned to a page on
which an insert was going to occur before the page was so locked,
a rw-conflict would be missed, which could allow a serialization
anomaly to be missed.  The window between the check and the page
lock was small, so the bug was generally not noticed unless there
was high concurrency with multiple processes inserting into the
same table.

This was reported by Peter Bailis as bug #11732, by Sean Chittenden
as bug #13667, and by others.

The race condition was eliminated in heap_insert() by moving the
check down below the acquisition of the buffer lock, which had been
the very next statement.  Because of the loop locking and unlocking
multiple buffers in heap_multi_insert() a check was added after all
inserts were completed.  The check before the start of the inserts
was left because it might avoid a large amount of work to detect a
serialization anomaly before performing all of the inserts and
the related WAL logging.

While investigating this bug, other SSI bugs which were even harder
to hit in practice were noticed and fixed, an unnecessary check
(covered by another check, so redundant) was removed from
heap_update(), and comments were improved.

Back-patch to all supported branches.

Kevin Grittner and Thomas Munro
2015-10-31 14:42:46 -05:00
Robert Haas
21e634e4b2 doc: security_barrier option is a Boolean, not a string.
Mistake introduced by commit 5bd91e3a83.

Hari Babu
2015-10-30 12:22:18 +01:00
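
Since security_barrier is a Boolean reloption, the usual Boolean spellings apply; a short sketch with hypothetical names:

-- A bare option name means "true" for a Boolean reloption.
CREATE VIEW visible_rows WITH (security_barrier) AS
    SELECT * FROM accounts WHERE owner = current_user;
ALTER VIEW visible_rows SET (security_barrier = false);
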
Robert Haas
7852f73bdf Fix typo in bgworker.c 2015-10-30 10:38:27 +01:00
Tom Lane
9a1a22980d Docs: add example clarifying use of nested JSON containment.
Show how this can be used in practice to make queries simpler and more
flexible.  Also, draw an explicit contrast to the existence operator,
which doesn't work that way.

Peter Geoghegan and Tom Lane
2015-10-29 18:54:44 -04:00
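
The contrast the new documentation example draws can be sketched directly (the values here are illustrative, not necessarily the exact ones used in the docs):

-- Containment reaches into nested structure:
SELECT '{"tags": [{"term": "paris"}, {"term": "food"}]}'::jsonb
       @> '{"tags": [{"term": "paris"}]}';                    -- true
-- The existence operator only checks top-level keys, so it cannot:
SELECT '{"tags": [{"term": "paris"}, {"term": "food"}]}'::jsonb ? 'paris';  -- false
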
Peter Eisentraut
0bc3071796 Message style improvements
Message style, plurals, quoting, spelling, consistency with similar
messages
2015-10-28 20:23:53 -04:00
Robert Haas
d17d5125f6 Add missing serial comma, for consistency.
Amit Langote, per Etsuro Fujita
2015-10-28 12:24:51 +01:00
Robert Haas
e53e2a1968 Fix incorrect message in ATWrongRelkindError.
Mistake introduced by commit 3bf3ab8c56.

Etsuro Fujita
2015-10-28 11:49:55 +01:00
Alvaro Herrera
c56949168c Fix secondary expected output for commit_ts test
Per red wall in buildfarm
2015-10-27 23:02:04 -03:00
Alvaro Herrera
3e9e033539 Document BRIN's inclusion opclass framework
Backpatch to 9.5 -- this should have been part of b0b7be6133, but we
didn't have 38b03caebc either at the time.

Author: Emre Hasegeli
Revised by: Ian Barwick
Discussion:
 http://www.postgresql.org/message-id/CAE2gYzyB39Q9up_-TO6FKhH44pcAM1x6n_Cuj15qKoLoFihUVg@mail.gmail.com
 http://www.postgresql.org/message-id/562DA711.3020305@2ndquadrant.com
2015-10-27 19:03:15 -03:00
Alvaro Herrera
cf42abcc65 Fix BRIN free space computations
A bug in the original free space computation made it possible to
return a page which wasn't actually able to fit the item.  Since the
insertion code isn't prepared to deal with PageAddItem failing, a PANIC
resulted ("failed to add BRIN tuple [to new page]").  Add a macro to
encapsulate the correct computation, and use it in
brin_getinsertbuffer's callers before calling that routine, to raise an
early error.

I became aware of the possibility of a problem in this area while working
on ccc4c07499.  There's no archived discussion about it, but it's
easy to reproduce a problem in the unpatched code with something like

CREATE TABLE t (a text);
CREATE INDEX ti ON t USING brin (a) WITH (pages_per_range=1);

for length in `seq 8000 8196`
do
	psql -f - <<EOF
TRUNCATE TABLE t;
INSERT INTO t VALUES ('z'), (repeat('a', $length));
EOF
done

Backpatch to 9.5, where BRIN was introduced.
2015-10-27 18:17:55 -03:00
Alvaro Herrera
68cc162e45 Cleanup commit timestamp module activation, again
Further tweak commit_ts.c so that on a standby the state is completely
consistent with that in the master, rather than behaving differently
when the settings differ.  Now in standby and
master the module should always be active or inactive in lockstep.

Author: Petr Jelínek, with some further tweaks by Álvaro Herrera.

Backpatch to 9.5, where commit timestamps were introduced.

Discussion: http://www.postgresql.org/message-id/5622BF9D.2010409@2ndquadrant.com
2015-10-27 15:06:50 -03:00
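
For orientation, this module backs the track_commit_timestamp setting; with it enabled (consistently on master and standby, per the fix above), commit timestamps can be looked up like this (hedged sketch, hypothetical table):

SELECT pg_xact_commit_timestamp(xmin), id
FROM accounts
LIMIT 1;
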
Alvaro Herrera
80ae841f2f Measure string lengths only once
Bernd Helmle complained that CreateReplicationSlot() was assigning the
same value to the same variable twice, so we could remove one of them.
Code inspection reveals that we can actually remove both assignments:
according to the author, the assignment was there only for the beauty of
the strlen line, and another possible fix is to put the strlen on its own
line, so do that.

To be consistent within the file, refactor all duplicated strlen()
calls, which is what we do elsewhere in the backend anyway.  In
basebackup.c, snprintf already returns the right length; no need for
strlen afterwards.

Backpatch to 9.4, where replication slots were introduced, to keep code
identical.  Some of this is older, but the patch doesn't apply cleanly
and it's only of cosmetic value anyway.

Discussion: http://www.postgresql.org/message-id/BE2FD71DEA35A2287EA5F018@eje.credativ.lan
2015-10-27 13:20:40 -03:00
Robert Haas
44390e30f8 shm_mq: Repair breakage from previous commit.
If the counterparty writes some data into the queue and then detaches,
it's wrong to return SHM_MQ_DETACHED right away.  If we do that, we
fail to read whatever was written.
2015-10-22 22:03:58 -04:00
Robert Haas
17b07afae3 Add two missing cases to ATWrongRelkindError.
This way, we produce a better error message if someone tries to do
something like ALTER INDEX .. ALTER COLUMN .. SET STORAGE.

Amit Langote
2015-10-22 17:04:18 -04:00
Robert Haas
ac9a01615c shm_mq: Fix failure to notice a dead counterparty when nowait is used.
The shm_mq mechanism was intended to optionally notice when the process
on the other end of the queue fails to attach to the queue.  It does
this by allowing the user to pass a BackgroundWorkerHandle; if the
background worker in question is launched and dies without attaching
to the queue, then we know it never will.  This logic works OK in
blocking mode, but when called with nowait = true we fail to notice
that this has happened due to an asymmetry in the logic.  Repair.

Reported off-list by Rushabh Lathia.  Patch by me.
2015-10-22 16:40:00 -04:00
Peter Eisentraut
85e30f57cb doc: Add advice on updating checkpoint_segments to max_wal_size
with suggestion from Michael Paquier
2015-10-22 14:02:10 -04:00
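
The added advice amounts to a simple conversion; a hedged sketch of migrating an old setting (treat the multiplier as a rule of thumb, not an exact equivalence):

-- max_wal_size is roughly (3 * checkpoint_segments) * 16MB,
-- so e.g. checkpoint_segments = 32 maps to about 1536MB.
ALTER SYSTEM SET max_wal_size = '1536MB';
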
Tom Lane
5fb20a5ba6 Fix incorrect translation of minus-infinity datetimes for json/jsonb.
Commit bda76c1c8c caused both plus and
minus infinity to be rendered as "infinity", which is not only wrong
but inconsistent with the pre-9.4 behavior of to_json().  Fix that by
duplicating the coding in date_out/timestamp_out/timestamptz_out more
closely.  Per bug #13687 from Stepan Perlov.  Back-patch to 9.4, like
the previous commit.

In passing, also re-pgindent json.c, since it had gotten a bit messed up by
recent patches (and I was already annoyed by indentation-related problems
in back-patching this fix ...)
2015-10-20 11:07:05 -07:00
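
The user-visible effect of the fix, sketched briefly:

SELECT to_json('infinity'::timestamptz);    -- "infinity"
SELECT to_json('-infinity'::timestamptz);   -- now "-infinity"
                                            -- (previously also rendered "infinity")
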