Commit Graph

43376 Commits

Author SHA1 Message Date
Peter Eisentraut ba26f5cf76 Remove our own definition of NULL
Surely everyone has that by now.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Ryan Murphy <ryanfmurphy@gmail.com>
2017-09-05 14:52:54 -04:00
Peter Eisentraut 5a739e7b2c fuzzystrmatch: Remove dead code
Remnants left behind by a323ede280

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Ryan Murphy <ryanfmurphy@gmail.com>
2017-09-05 14:52:54 -04:00
Peter Eisentraut 90627cf98a Support retaining data dirs on successful TAP tests
This moves the data directories from temporary directories with
randomized names to static names, to make debugging easier.  The data
directory will be retained if tests fail or the test code dies/exits
with failure, and is automatically removed on the next make check.

If the environment variable PG_TEST_NOCLEAN is defined, the data
directories will be retained regardless of test or exit status.

Author: Daniel Gustafsson <daniel@yesql.se>
2017-09-05 12:24:06 -04:00
Tom Lane 5e8304fdce In psql, use PSQL_PAGER in preference to PAGER, if it's set.
This allows the user's environment to set up a psql-specific choice
of pager, in much the same way that we provide PSQL_EDITOR to allow
a psql-specific override of the more widely known EDITOR variable.

Pavel Stehule, reviewed by Thomas Munro

Discussion: https://postgr.es/m/CAFj8pRD3RRk9S1eRbnGm_T6brc3Ss5mohraNzTSJquzx+pmtKA@mail.gmail.com
2017-09-05 12:02:13 -04:00
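A minimal sketch of that lookup order, using a hypothetical helper rather
than psql's actual code:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical helper: prefer PSQL_PAGER, then PAGER, then a fallback. */
    static const char *
    choose_pager(void)
    {
        const char *pager = getenv("PSQL_PAGER");

        if (pager == NULL)
            pager = getenv("PAGER");
        if (pager == NULL)
            pager = "more";          /* fallback chosen for illustration only */
        return pager;
    }

    int
    main(void)
    {
        printf("pager: %s\n", choose_pager());
        return 0;
    }
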
Alvaro Herrera ebd346caf4 Correct base backup throttling
Throttling for sending a base backup in walsender is broken for the case
where there is a lot of WAL traffic, because the latch used to put the
walsender to sleep is also signalled by regular WAL traffic (and each
signal causes an additional batch of data to be sent); the net effect is
that there is little or no actual throttling.  This is undesirable, so
rewrite the sleep into a loop to achieve the desired effect.

Author: Jeff Janes, small tweaks by me
Reviewed-by: Antonin Houska
Discussion: https://postgr.es/m/CAMkU=1xH6mde-yL-Eo1TKBGNd0PB1-TMxvrNvqcAkN-qr2E9mw@mail.gmail.com
2017-09-05 17:27:30 +02:00
Tom Lane 9ae9d8c154 Add psql variables showing server version and psql version.
We already had a psql variable VERSION that shows the verbose form of
psql's own version.  Add VERSION_NAME to show the short form (e.g.,
"11devel") and VERSION_NUM to show the numeric form (e.g., 110000).
Also add SERVER_VERSION_NAME and SERVER_VERSION_NUM to show the short and
numeric forms of the server's version.  (We'd probably add SERVER_VERSION
with the verbose string if it were readily available; but adding another
network round trip to get it seems too expensive.)

The numeric forms, in particular, are expected to be useful for scripting
purposes, now that psql can do conditional tests.

Fabien Coelho, reviewed by Pavel Stehule

Discussion: https://postgr.es/m/alpine.DEB.2.20.1704020917220.4632@lancre
2017-09-05 10:51:36 -04:00
Tom Lane 3955c8c4ed Reformat psql's --help=variables output.
The previous format with variable names and descriptions in separate
columns was extremely constraining about the length of the descriptions.
We'd dealt with that in several inconsistent ways over the years,
including letting the lines run over 80 characters, breaking descriptions
into multiple lines, or shoving the description onto a separate line.
But it's been a long time since the output could realistically fit onto
a single screen vertically, so let's just rely even more heavily on the
pager to deal with the vertical distance, and split each entry into two
(or more) lines, in the format

  variable-name
    variable description goes here

Each variable name + description remains a single translatable string,
in hopes of reducing translator confusion; we're just changing the
embedded whitespace.

I failed to resist the temptation to copy-edit one or two of the
descriptions while at it.

Discussion: https://postgr.es/m/2947.1504542679@sss.pgh.pa.us
2017-09-05 10:17:10 -04:00
Tom Lane 0b707d6ea7 Be more careful about newline-chomping in pgbench.
process_backslash_command would drop the last character of the input
command on the assumption that it was a newline.  Given an input file
not terminated by a newline, this could result in dropping the last character
of the command.  Fix that by doing an actual test that we're removing
a newline.

While at it, allow for Windows newlines (\r\n), and suppress multiple
newlines if any.  I do not think either of those cases really occurs,
since (a) we read script files in text mode and (b) the lexer stops
when it hits a newline.  But it's cheap enough and it provides a
stronger guarantee about what the result string looks like.

This is just cosmetic, I think, since the possibly-overly-chomped
line was only used for display not for further processing.  So
it doesn't seem necessary to back-patch.

Fabien Coelho, reviewed by Nikolay Shaplov, whacked around a bit by me

Discussion: https://postgr.es/m/alpine.DEB.2.20.1704171422500.4025@lancre
2017-09-04 17:25:31 -04:00
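A standalone sketch of the stricter chomping described above (test for an
actual newline, accept \r\n, strip repeats); the names are illustrative,
not pgbench's:

    #include <stdio.h>
    #include <string.h>

    /* Strip any trailing '\n' or '\r' characters, and nothing else. */
    static void
    chomp_newlines(char *cmd)
    {
        size_t len = strlen(cmd);

        while (len > 0 && (cmd[len - 1] == '\n' || cmd[len - 1] == '\r'))
            cmd[--len] = '\0';
    }

    int
    main(void)
    {
        char a[] = "\\set scale 10\r\n";
        char b[] = "\\set scale 10";     /* no newline: must stay intact */

        chomp_newlines(a);
        chomp_newlines(b);
        printf("[%s] [%s]\n", a, b);
        return 0;
    }
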
Tom Lane c23bb6badf Fix some subtle problems in pgbench transaction stats counting.
With --latency-limit, transactions might get skipped even beyond the
transaction count limit specified by -t, throwing off the expected
number of transactions and thus the denominator for later stats.
Be sure to stop skipping transactions once -t is reached.

Also, include skipped transactions in the "cnt" fields; this requires
discounting them again in a couple of places, but most places are
better off with this definition.  In particular this is needed to
get correct overall stats for the combination of -R/-L/-t options.
Merge some more processing into processXactStats() to simplify this.

In passing, add a check that --progress-timestamp is specified only
when --progress is.

We might consider back-patching this, but given that it only matters
for a combination of options, and given the lack of field complaints,
consensus seems to be not to bother.

Fabien Coelho, reviewed by Nikolay Shaplov

Discussion: https://postgr.es/m/alpine.DEB.2.20.1704171422500.4025@lancre
2017-09-04 16:24:08 -04:00
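One way to read the new accounting rule, sketched with hypothetical field
names: skipped transactions count toward cnt, and are discounted only where
purely executed transactions matter:

    #include <stdio.h>

    /* Hypothetical stats record; pgbench's actual structs differ. */
    typedef struct
    {
        long   cnt;          /* all transactions, including skipped ones */
        long   skipped;      /* transactions skipped under --latency-limit */
        double latency_sum;  /* summed only over executed transactions */
    } TxnStats;

    static void
    account(TxnStats *st, int was_skipped, double latency)
    {
        st->cnt++;                   /* skipped or not, it counts toward -t */
        if (was_skipped)
            st->skipped++;
        else
            st->latency_sum += latency;
    }

    static double
    average_latency(const TxnStats *st)
    {
        long executed = st->cnt - st->skipped;   /* discount skipped here */

        return executed > 0 ? st->latency_sum / executed : 0.0;
    }

    int
    main(void)
    {
        TxnStats st = {0, 0, 0.0};

        account(&st, 0, 1.5);
        account(&st, 1, 0.0);        /* skipped, but still bumps cnt */
        printf("cnt=%ld skipped=%ld avg=%.2f\n",
               st.cnt, st.skipped, average_latency(&st));
        return 0;
    }
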
Tom Lane 9d36a38660 Adjust pgbench to allow non-ASCII characters in variable names.
This puts it in sync with psql's notion of what is a valid variable name.
Like psql, we document that "non-Latin letters" are allowed, but actually
any non-ASCII character is accepted.

Fabien Coelho

Discussion: https://postgr.es/m/20170405.094548.1184280384967203518.t-ishii@sraoss.co.jp
2017-09-04 13:45:20 -04:00
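A sketch of the relaxed rule (ASCII letters, digits, underscore, or any
non-ASCII byte); the function name is made up for illustration:

    #include <stdbool.h>
    #include <stdio.h>

    static bool
    valid_variable_name(const char *name)
    {
        const unsigned char *p = (const unsigned char *) name;

        if (*p == '\0')
            return false;
        for (; *p; p++)
        {
            if (*p >= 0x80)
                continue;            /* any non-ASCII byte is accepted */
            if ((*p >= 'a' && *p <= 'z') || (*p >= 'A' && *p <= 'Z') ||
                (*p >= '0' && *p <= '9') || *p == '_')
                continue;
            return false;
        }
        return true;
    }

    int
    main(void)
    {
        printf("%d %d %d\n",
               valid_variable_name("scale"),
               valid_variable_name("échelle"),     /* non-ASCII: accepted */
               valid_variable_name("bad name"));   /* space: rejected */
        return 0;
    }
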
Alvaro Herrera 863d75439e Fix translatable string
Discussion: https://postgr.es/m/20170828130545.sdajqlpr37hmmd6a@alvherre.pgsql
2017-09-04 11:08:52 +02:00
Tom Lane 4faa1dc2eb Suppress compiler warnings in dshash.c.
Some compilers complain, not unreasonably, about left-shifting an
int32 "1" and then assigning the result to an int64.  In practice
I sure hope that this data structure never gets large enough that
an overflow would actually occur; but let's cast the constant to
the right type to avoid the hazard.

In passing, fix a typo in dshash.h.

Amit Kapila, adjusted as per comment from Thomas Munro.

Discussion: https://postgr.es/m/CAA4eK1+5vfVMYtjK_NX8O3-42yM3o80qdqWnQzGquPrbq6mb+A@mail.gmail.com
2017-09-03 11:12:29 -04:00
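The hazard and the fix in miniature (a standalone illustration, not the
dshash.c code itself):

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        int     size_log2 = 40;    /* large enough that a 32-bit shift overflows */
        int64_t nbuckets;

        /* Wrong: "1" is an int, so the shift happens in 32 bits:
         *     nbuckets = 1 << size_log2;
         * Right: widen the constant before shifting. */
        nbuckets = ((int64_t) 1) << size_log2;

        printf("%lld\n", (long long) nbuckets);
        return 0;
    }
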
Tom Lane e451901804 Fix macro-redefinition warning on MSVC.
In commit 9d6b160d7, I tweaked pg_config.h.win32 to use
"#define HAVE_LONG_LONG_INT_64 1" rather than defining it as empty,
for consistency with what happens in an autoconf'd build.
But Solution.pm injects another definition of that macro into
ecpg_config.h, leading to justifiable (though harmless) compiler whining.
Make that one consistent too.  Back-patch, like the previous patch.

Discussion: https://postgr.es/m/CAEepm=1dWsXROuSbRg8PbKLh0S=8Ou-V8sr05DxmJOF5chBxqQ@mail.gmail.com
2017-09-03 11:01:08 -04:00
Peter Eisentraut afc58affb6 doc: Fix typos and other minor issues
Author: Alexander Lakhin <exclusion@gmail.com>
2017-09-01 23:34:12 -04:00
Tom Lane 51daa7bdb3 Improve division of labor between execParallel.c and nodeGather[Merge].c.
Move the responsibility for creating/destroying TupleQueueReaders into
execParallel.c, to avoid duplicative coding in nodeGather.c and
nodeGatherMerge.c.  Also, instead of having DestroyTupleQueueReader do
shm_mq_detach, do it in the caller (which is now only ExecParallelFinish).
This means execParallel.c does both the attaching and detaching of the
tuple-queue-reader shm_mqs, which seems less weird than the previous
arrangement.

These changes also eliminate a vestigial memory leak (of the pei->tqueue
array).  It's now demonstrable that rescans of Gather or GatherMerge don't
leak memory.

Discussion: https://postgr.es/m/8670.1504192177@sss.pgh.pa.us
2017-09-01 17:39:01 -04:00
Peter Eisentraut c039ba0716 Add memory info to getrusage output
Add the maxrss field to the getrusage output (log_*_stats).  This was
previously omitted because of portability concerns, but those concerns
no longer seem to apply.

based on patch by Justin Pryzby <pryzby@telsasoft.com>
2017-09-01 15:36:33 -04:00
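For reference, maxrss comes straight from the POSIX getrusage() interface;
a standalone example (note that the unit of ru_maxrss is platform-dependent,
e.g. kilobytes on Linux):

    #include <stdio.h>
    #include <sys/resource.h>

    int
    main(void)
    {
        struct rusage r;

        if (getrusage(RUSAGE_SELF, &r) == 0)
            printf("max resident set size: %ld (platform-dependent units)\n",
                   (long) r.ru_maxrss);
        return 0;
    }
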
Robert Haas 0cb8b7531d Tighten up some code in RelationBuildPartitionDesc.
This probably doesn't save anything meaningful in terms of
performance, but making the code simpler is a good idea anyway.

Code by Beena Emerson, extracted from a larger patch by Jeevan
Ladhe, slightly adjusted by me.

Discussion: http://postgr.es/m/CAOgcT0ONgwajdtkoq+AuYkdTPY9cLWWLjxt_k4SXue3eieAr+g@mail.gmail.com
2017-09-01 15:16:44 -04:00
Tom Lane 9d6b160d7d Make [U]INT64CONST safe for use in #if conditions.
Instead of using a cast to force the constant to be the right width,
assume we can plaster on an L, UL, LL, or ULL suffix as appropriate.
The old approach to this is very hoary, dating from before we were
willing to require compilers to have working int64 types.

This fix makes the PG_INT64_MIN, PG_INT64_MAX, and PG_UINT64_MAX
constants safe to use in preprocessor conditions, where a cast
doesn't work.  Other symbolic constants that might be defined using
[U]INT64CONST are likewise safer than before.

Also fix the SIZE_MAX macro to be similarly safe, if we are forced
to provide a definition for that.  The test added in commit 2e70d6b5e
happens to do what we want even with the hack "(size_t) -1" definition,
but we could easily get burnt on other tests in future.

Back-patch to all supported branches, like the previous commits.

Discussion: https://postgr.es/m/15883.1504278595@sss.pgh.pa.us
2017-09-01 15:14:46 -04:00
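A sketch of the suffix-pasting approach (the real macros live in c.h and are
driven by configure results; the MY_ names here are made up):

    #include <stdio.h>

    /* Pick the suffix that matches the platform's 64-bit integer type. */
    #if defined(MY_LONG_IS_64BIT)
    #define MY_INT64CONST(x)  (x##L)
    #define MY_UINT64CONST(x) (x##UL)
    #else
    #define MY_INT64CONST(x)  (x##LL)
    #define MY_UINT64CONST(x) (x##ULL)
    #endif

    #define MY_INT64_MAX MY_INT64CONST(0x7FFFFFFFFFFFFFFF)

    /* Because the constant carries a suffix instead of a cast, it is
     * legal in preprocessor arithmetic, unlike "((int64) ...)". */
    #if MY_INT64_MAX > 0
    #endif

    int
    main(void)
    {
        printf("%lld\n", (long long) MY_INT64_MAX);
        return 0;
    }
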
Peter Eisentraut a057220353 doc: Remove mentions of server-side CRL and CA file names
Commit a445cb92ef removed the default file
names for server-side CRL and CA files, but left them in the docs with a
small note.  This removes the note and the previous default names for
clarity, and rewords mentions of the file names to make it clearer that
they are configurable.

Author: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2017-09-01 14:18:45 -04:00
Tom Lane b79d69b087 Ensure SIZE_MAX can be used throughout our code.
Pre-C99 platforms may lack <stdint.h> and thereby SIZE_MAX.  We have
a couple of places using the hack "(size_t) -1" as a fallback, but
it wasn't universally available; which means the code added in commit
2e70d6b5e fails to compile everywhere.  Move that hack to c.h so that
we can rely on having SIZE_MAX everywhere.

Per discussion, it'd be a good idea to make the macro's value safe
for use in #if-tests, but that will take a bit more work.  This is
just a quick expedient to get the buildfarm green again.

Back-patch to all supported branches, like the previous commit.

Discussion: https://postgr.es/m/15883.1504278595@sss.pgh.pa.us
2017-09-01 13:52:53 -04:00
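The fallback hack mentioned above, shown standalone (PostgreSQL centralizes
it in c.h):

    #include <stddef.h>
    #include <stdint.h>        /* provides SIZE_MAX on C99-and-later systems */
    #include <stdio.h>

    #ifndef SIZE_MAX
    #define SIZE_MAX ((size_t) -1)    /* pre-C99 fallback: an all-ones size_t */
    #endif

    int
    main(void)
    {
        printf("%zu\n", (size_t) SIZE_MAX);
        return 0;
    }
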
Robert Haas 84be67181a pg_dumpall: Add a -E flag to set the encoding, like pg_dump has.
Michael Paquier, reviewed by Fabien Coelho

Discussion: http://postgr.es/m/CAB7nPqQcYWmrm2n-dVaMUhYPKFU_DxQwPuUGuC4ZF+8B=dS5xQ@mail.gmail.com
2017-09-01 12:24:20 -04:00
Simon Riggs 2f5ada2710 Provisional list of Major Features 2017-09-01 17:20:18 +01:00
Robert Haas baaf272ac9 Use group updates when setting transaction status in clog.
Commit 0e141c0fbb introduced a mechanism
to reduce contention on ProcArrayLock by having a single process clear
XIDs in the procArray on behalf of multiple processes, reducing the
need to hand the lock around.  A previous attempt to introduce a similar
mechanism for CLogControlLock in ccce90b398
crashed and burned, but the design problem which resulted in those
failures is believed to have been corrected in this version.

Amit Kapila, with some cosmetic changes by me.  See the previous commit
message for additional credits.

Discussion: http://postgr.es/m/CAA4eK1KudxzgWhuywY_X=yeSAhJMT4DwCjroV5Ay60xaeB2Eew@mail.gmail.com
2017-09-01 11:45:40 -04:00
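A rough sketch of the group-update pattern using C11 atomics; the names are
hypothetical, and the real code uses PGPROC lists, latches, and LWLocks
rather than busy-waiting:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct Member
    {
        int            xid_to_set;     /* work this backend wants done */
        struct Member *next;
        _Atomic bool   done;
    } Member;

    static _Atomic(Member *) group_head;

    /* Stand-ins for LWLockAcquire/LWLockRelease and the real status update. */
    static void acquire_clog_lock(void) {}
    static void release_clog_lock(void) {}
    static void set_status(int xid) { printf("set status of xid %d\n", xid); }

    static void
    group_set_status(Member *me)
    {
        Member *head = atomic_load(&group_head);

        /* Join the pending group with a CAS loop. */
        do
            me->next = head;
        while (!atomic_compare_exchange_weak(&group_head, &head, me));

        if (head != NULL)
        {
            /* Not the leader: wait until the leader has done our work. */
            while (!atomic_load(&me->done))
                ;                       /* real code sleeps on a latch instead */
            return;
        }

        /* First one in becomes the leader: take the lock once and service
         * everyone who joined, instead of handing the lock around. */
        acquire_clog_lock();
        for (Member *p = atomic_exchange(&group_head, NULL); p != NULL; )
        {
            Member *next = p->next;

            set_status(p->xid_to_set);
            if (p != me)
                atomic_store(&p->done, true);
            p = next;
        }
        release_clog_lock();
    }

    int
    main(void)
    {
        Member me = {.xid_to_set = 42};

        group_set_status(&me);          /* alone, so we act as our own leader */
        return 0;
    }
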
Alvaro Herrera 89c59b742a Fix two-phase commit test for recovery mode
The original code had a race condition because it never ensured the
standby was caught up before proceeding; add a wait similar to every
other place that does this.

Author: Michaël Paquier
Discussion: https://postgr.es/m/CAB7nPqTm9p+LCm1mVJYvgpwagRK+uibT-pKq0O2-paOWxT62jw@mail.gmail.com
2017-09-01 16:51:55 +02:00
Alvaro Herrera a6979c3a68 Restore behavior for replication origin drop
Do for replication origins what the previous commit did for replication
slots: restore the original behavior of replication origin drop to raise
an error rather than blocking, because users might be depending on the
original behavior.  Maintain the blocking behavior when invoked
internally from logical replication subscription handling.

Discussion: https://postgr.es/m/20170830133922.tlpo3lgfejm4n2cs@alvherre.pgsql
2017-09-01 16:30:02 +02:00
Simon Riggs 4f27c674fd Avoid race condition in logical replication test
Wait for slot to become inactive before continuing.

Author: Petr Jelinek
2017-09-01 14:55:44 +01:00
Alvaro Herrera be7161566d Add a WAIT option to DROP_REPLICATION_SLOT
Commit 9915de6c1c changed the default behavior of
DROP_REPLICATION_SLOT so that it would wait until any session holding
the slot active would release it, instead of raising an error.  But
users are already depending on the original behavior, so revert to it by
default and add a WAIT option to invoke the new behavior.

Per complaint from Simone Gotti.
Discussion: https://postgr.es/m/CAEvsy6Wgdf90O6pUvg2wSVXL2omH5OPC-38OD4Zzgk-FXavj3Q@mail.gmail.com
2017-09-01 13:44:14 +02:00
Simon Riggs abe85ef1d0 Add note about disk space usage of pg_commit_ts
Author: Thomas Munro
2017-09-01 07:57:05 +01:00
Robert Haas 7b69b6ceb8 Fix assorted carelessness about Datum vs. int64 vs. uint64
Bugs introduced by commit 81c5e46c49
2017-09-01 00:14:54 -04:00
Robert Haas 0d9506d125 Try to repair poorly-considered code in previous commit. 2017-08-31 23:09:00 -04:00
Robert Haas 81c5e46c49 Introduce 64-bit hash functions with a 64-bit seed.
This will be useful for hash partitioning, which needs a way to seed
the hash functions to avoid problems such as a hash index on a hash
partitioned table clumping all values into a small portion of the
bucket space; it's also useful for anything that wants a 64-bit hash
value rather than a 32-bit hash value.

Just in case somebody wants a 64-bit hash value that is compatible
with the existing 32-bit hash values, make the low 32-bits of the
64-bit hash value match the 32-bit hash value when the seed is 0.

Robert Haas and Amul Sul

Discussion: http://postgr.es/m/CA+Tgmoafx2yoJuhCQQOL5CocEi-w_uG4S2xT0EtgiJnPGcHW3g@mail.gmail.com
2017-08-31 22:21:21 -04:00
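The compatibility contract described above, illustrated with a toy hash
rather than PostgreSQL's real hash functions:

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Toy 32-bit hash (FNV-1a), standing in for the existing hash function. */
    static uint32_t
    toy_hash32(const unsigned char *k, size_t len)
    {
        uint32_t h = 2166136261u;

        for (size_t i = 0; i < len; i++)
            h = (h ^ k[i]) * 16777619u;
        return h;
    }

    /* Toy 64-bit, seeded variant.  With seed == 0 the low 32 bits are made
     * to match the unseeded 32-bit hash, mirroring the contract above. */
    static uint64_t
    toy_hash64_extended(const unsigned char *k, size_t len, uint64_t seed)
    {
        uint32_t lo = toy_hash32(k, len);
        uint32_t hi = (uint32_t) (seed ^ (seed >> 32)) ^ (lo * 2654435761u);

        if (seed == 0)
            return ((uint64_t) hi << 32) | lo;   /* low word stays compatible */
        return (((uint64_t) hi << 32) | lo) ^ seed;
    }

    int
    main(void)
    {
        const unsigned char key[] = "partition key";
        uint64_t h64 = toy_hash64_extended(key, sizeof(key), 0);

        assert((uint32_t) h64 == toy_hash32(key, sizeof(key)));
        printf("ok: low 32 bits match when seed is 0\n");
        return 0;
    }
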
Tom Lane 2d44c58c79 Avoid memory leaks when a GatherMerge node is rescanned.
Rescanning a GatherMerge led to leaking some memory in the executor's
query-lifespan context, because most of the node's working data structures
were simply abandoned and rebuilt from scratch.  In practice, this might
never amount to much, given the cost of relaunching worker processes ---
but it's still pretty messy, so let's fix it.

We can rearrange things so that the tuple arrays are simply cleared and
reused, and we don't need to rebuild the TupleTableSlots either, just
clear them.  One small complication is that because we might get a
different number of workers on each iteration, we can't keep the old
convention that the leader's gm_slots[] entry is the last one; the leader
might clobber a TupleTableSlot that we need for a worker in a future
iteration.  Hence, adjust the logic so that the leader has slot 0 always,
while the active workers have slots 1..n.

Back-patch to v10 to keep all the existing versions of nodeGatherMerge.c
in sync --- because of the renumbering of the slots, there would otherwise
be a very large risk that any future backpatches in this module would
introduce bugs.

Discussion: https://postgr.es/m/8670.1504192177@sss.pgh.pa.us
2017-08-31 16:21:05 -04:00
Robert Haas 30833ba154 Expand partitioned tables in PartDesc order.
Previously, we expanded the inheritance hierarchy in the order in
which find_all_inheritors had locked the tables, but that turns out
to block quite a bit of useful optimization.  For example, a
partition-wise join can't count on two tables with matching bounds
to get expanded in the same order.

Where possible, this change results in expanding partitioned tables in
*bound* order.  Bound order isn't well-defined for a list-partitioned
table with a null-accepting partition or for a list-partitioned table
where the bounds for a single partition are interleaved with other
partitions.  However, when expansion in bound order is possible, it
opens up further opportunities for optimization, such as
strength-reducing MergeAppend to Append when the expansion order
matches the desired sort order.

Patch by me, with cosmetic revisions by Ashutosh Bapat.

Discussion: http://postgr.es/m/CA+TgmoZrKj7kEzcMSum3aXV4eyvvbh9WD=c6m=002WMheDyE3A@mail.gmail.com
2017-08-31 15:50:18 -04:00
Tom Lane 6708e447ef Clean up shm_mq cleanup.
The logic around shm_mq_detach was a few bricks shy of a load, because
(contrary to the comments for shm_mq_attach) all it did was update the
shared shm_mq state.  That left us leaking a bit of process-local
memory, but much worse, the on_dsm_detach callback for shm_mq_detach
was still armed.  That means that whenever we ultimately detach from
the DSM segment, we'd run shm_mq_detach again for already-detached,
possibly long-dead queues.  This accidentally fails to fail today,
because we only ever re-use a shm_mq's memory for another shm_mq, and
multiple detach attempts on the last such shm_mq are fairly harmless.
But it's gonna bite us someday, so let's clean it up.

To do that, change shm_mq_detach's API so it takes a shm_mq_handle
not the underlying shm_mq.  This makes the callers simpler in most
cases anyway.  Also fix a few places in parallel.c that were just
pfree'ing the handle structs rather than doing proper cleanup.

Back-patch to v10 because of the risk that the revenant shm_mq_detach
callbacks would cause a live bug sometime.  Since this is an API
change, it's too late to do it in 9.6.  (We could make a variant
patch that preserves API, but I'm not excited enough to do that.)

Discussion: https://postgr.es/m/8670.1504192177@sss.pgh.pa.us
2017-08-31 15:10:24 -04:00
Tom Lane 4b1dd62a25 Improve code coverage of select_parallel test.
Make sure that rescans of parallel indexscans are tested.
Per code coverage report.
2017-08-31 13:15:54 -04:00
Peter Eisentraut b5c75feca7 Remove pre-8.2 coding convention for PG_MODULE_MAGIC
PG_MODULE_MAGIC has been around since 8.2, and 8.1 has long since reached EOL,
so remove the mention of #ifdef guards for compiling against pre-8.2
sources from the documentation.

Author: Daniel Gustafsson <daniel@yesql.se>
2017-08-30 22:40:24 -04:00
Tom Lane 04e9678614 Code review for nodeGatherMerge.c.
Comment the fields of GatherMergeState, and organize them a bit more
sensibly.  Comment GMReaderTupleBuffer more usefully too.  Improve
assorted other comments that were obsolete or just not very good English.

Get rid of the use of a GMReaderTupleBuffer for the leader process;
that was confusing, since only the "done" field was used, and that
in a way redundant with need_to_scan_locally.

In gather_merge_init, avoid calling load_tuple_array for
already-known-exhausted workers.  I'm not sure if there's a live bug there,
but the case is unlikely to be well tested due to timing considerations.

Remove some useless code, such as duplicating the tts_isempty test done by
TupIsNull.

Remove useless initialization of ps.qual, replacing that with an assertion
that we have no qual to check.  (If we did, the code would fail to check
it.)

Avoid applying heap_copytuple to a null tuple.  While that fails to crash,
it's confusing and it makes the code less legible, not more so, IMO.

Propagate a couple of these changes into nodeGather.c, as well.

Back-patch to v10, partly because of the possibility that the
gather_merge_init change is fixing a live bug, but mostly to keep
the branches in sync to ease future bug fixes.
2017-08-30 17:21:08 -04:00
Tom Lane 41b0dd987d Separate reinitialization of shared parallel-scan state from ExecReScan.
Previously, the parallel executor logic did reinitialization of shared
state within the ExecReScan code for parallel-aware scan nodes.  This is
problematic, because it means that the ExecReScan call has to occur
synchronously (ie, during the parent Gather node's ReScan call).  That is
swimming very much against the tide so far as the ExecReScan machinery is
concerned; the fact that it works at all today depends on a lot of fragile
assumptions, such as that no plan node between Gather and a parallel-aware
scan node is parameterized.  Another objection is that because ExecReScan
might be called in workers as well as the leader, hacky extra tests are
needed in some places to prevent unwanted shared-state resets.

Hence, let's separate this code into two functions, a ReInitializeDSM
call and the ReScan call proper.  ReInitializeDSM is called only in
the leader and is guaranteed to run before we start new workers.
ReScan is returned to its traditional function of resetting only local
state, which means that ExecReScan's usual habits of delaying or
eliminating child rescan calls are safe again.

As with the preceding commit 7df2c1f8d, it doesn't seem to be necessary
to make these changes in 9.6, which is a good thing because the FDW and
CustomScan APIs are impacted.

Discussion: https://postgr.es/m/CAA4eK1JkByysFJNh9M349u_nNjqETuEnY_y1VUc_kJiU0bxtaQ@mail.gmail.com
2017-08-30 13:18:16 -04:00
Tom Lane 6c2c5bea3c Restore test case from a2b70c89ca.
Revert the reversion commits a20aac890 and 9b644745c.  In the wake of
commit 7df2c1f8d, we should get stable buildfarm results from this test;
if not, I'd like to know sooner not later.

Discussion: https://postgr.es/m/CAA4eK1JkByysFJNh9M349u_nNjqETuEnY_y1VUc_kJiU0bxtaQ@mail.gmail.com
2017-08-30 09:59:23 -04:00
Tom Lane 7df2c1f8da Force rescanning of parallel-aware scan nodes below a Gather[Merge].
The ExecReScan machinery contains various optimizations for postponing
or skipping rescans of plan subtrees; for example a HashAgg node may
conclude that it can re-use the table it built before, instead of
re-reading its input subtree.  But that is wrong if the input contains
a parallel-aware table scan node, since the portion of the table scanned
by the leader process is likely to vary from one rescan to the next.
This explains the timing-dependent buildfarm failures we saw after
commit a2b70c89c.

The established mechanism for showing that a plan node's output is
potentially variable is to mark it as depending on some runtime Param.
Hence, to fix this, invent a dummy Param (one that has a PARAM_EXEC
parameter number, but carries no actual value) associated with each Gather
or GatherMerge node, mark parallel-aware nodes below that node as dependent
on that Param, and arrange for ExecReScanGather[Merge] to flag that Param
as changed whenever the Gather[Merge] node is rescanned.

This solution breaks an undocumented assumption made by the parallel
executor logic, namely that all rescans of nodes below a Gather[Merge]
will happen synchronously during the ReScan of the top node itself.
But that's fundamentally contrary to the design of the ExecReScan code,
and so was doomed to fail someday anyway (even if you want to argue
that the bug being fixed here wasn't a failure of that assumption).
A follow-on patch will address that issue.  In the meantime, the worst
that's expected to happen is that given very bad timing luck, the leader
might have to do all the work during a rescan, because workers think
they have nothing to do, if they are able to start up before the eventual
ReScan of the leader's parallel-aware table scan node has reset the
shared scan state.

Although this problem exists in 9.6, there does not seem to be any way
for it to manifest there.  Without GatherMerge, it seems that a plan tree
that has a rescan-short-circuiting node below Gather will always also
have one above it that will short-circuit in the same cases, preventing
the Gather from being rescanned.  Hence we won't take the risk of
back-patching this change into 9.6.  But v10 needs it.

Discussion: https://postgr.es/m/CAA4eK1JkByysFJNh9M349u_nNjqETuEnY_y1VUc_kJiU0bxtaQ@mail.gmail.com
2017-08-30 09:29:55 -04:00
Peter Eisentraut 00f6d5c2c3 doc: Avoid sidebar element
The formatting of the sidebar element didn't carry over to the new tool
chain.  Instead of inventing a whole new way of dealing with it, just
convert the one use to a "note".
2017-08-29 19:33:24 -04:00
Tom Lane 6cbee65eee Doc: document libpq's restriction to INT_MAX rows in a PGresult.
As long as PQntuples, PQgetvalue, etc, use "int" for row numbers, we're
pretty much stuck with this limitation.  The documentation formerly stated
that the result of PQntuples "might overflow on 32-bit operating systems",
which is just nonsense: that's not where the overflow would happen, and
if you did reach an overflow it would not be on a 32-bit machine, because
you'd have OOM'd long since.

Discussion: https://postgr.es/m/CA+FnnTxyLWyjY1goewmJNxC==HQCCF4fKkoCTa9qR36oRAHDPw@mail.gmail.com
2017-08-29 15:38:05 -04:00
Tom Lane 2e70d6b5e9 Teach libpq to detect integer overflow in the row count of a PGresult.
Adding more than 1 billion rows to a PGresult would overflow its ntups and
tupArrSize fields, leading to client crashes.  It'd be desirable to use
wider fields on 64-bit machines, but because all of libpq's external APIs
use plain "int" for row counters, that's going to be hard to accomplish
without an ABI break.  Given the lack of complaints so far, and the general
pain that would be involved in using such huge PGresults, let's settle for
just preventing the overflow and reporting a useful error message if it
does happen.  Also, for a couple more lines of code we can increase the
threshold of trouble from INT_MAX/2 to INT_MAX rows.

To do that, refactor pqAddTuple() to allow returning an error message that
replaces the default assumption that it failed because of out-of-memory.

Along the way, fix PQsetvalue() so that it reports all failures via
pqInternalNotice().  It already did so in the case of bad field number,
but neglected to report anything for other error causes.

Because of the potential for crashes, this seems like a back-patchable
bug fix, despite the lack of field reports.

Michael Paquier, per a complaint from Igor Korot.

Discussion: https://postgr.es/m/CA+FnnTxyLWyjY1goewmJNxC==HQCCF4fKkoCTa9qR36oRAHDPw@mail.gmail.com
2017-08-29 15:18:01 -04:00
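A sketch of the overflow guard, using hypothetical field and function names
rather than libpq's internals:

    #include <limits.h>
    #include <stdbool.h>
    #include <stdlib.h>

    /* Hypothetical result object; libpq's PGresult is more elaborate. */
    typedef struct
    {
        int    ntups;        /* number of rows stored so far */
        int    tupArrSize;   /* allocated slots in tuples[] */
        void **tuples;
    } Result;

    /* Add a row; on failure, set *errmsg to a message the caller can report,
     * or leave it NULL to mean plain out-of-memory. */
    static bool
    add_tuple(Result *res, void *tup, const char **errmsg)
    {
        if (res->ntups >= res->tupArrSize)
        {
            int newSize;

            if (res->ntups >= INT_MAX)
            {
                *errmsg = "PGresult cannot support more than INT_MAX tuples";
                return false;
            }
            /* Grow geometrically, clamping at INT_MAX instead of overflowing. */
            newSize = (res->tupArrSize > INT_MAX / 2) ? INT_MAX : res->tupArrSize * 2;
            if (newSize <= res->ntups)
                newSize = res->ntups + 1;

            /* (A fuller version would also guard this byte-count multiply.) */
            void **newTuples = realloc(res->tuples, sizeof(void *) * (size_t) newSize);

            if (newTuples == NULL)
                return false;
            res->tuples = newTuples;
            res->tupArrSize = newSize;
        }
        res->tuples[res->ntups++] = tup;
        return true;
    }

    int
    main(void)
    {
        Result      res = {0, 0, NULL};
        const char *err = NULL;
        int         x = 7;

        if (!add_tuple(&res, &x, &err))
            return 1;
        free(res.tuples);
        return 0;
    }
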
Robert Haas bf11e7ee2e Propagate sort instrumentation from workers back to leader.
Up until now, when parallel query was used, no details about the
sort method or space used by the workers were available; details
were shown only for any sorting done by the leader.  Fix that.

Commit 1177ab1dab forced the test case
added by commit 1f6d515a67 to run
without parallelism; now that we have this infrastructure, allow
that again, with a little tweaking to make it pass with and without
force_parallel_mode.

Robert Haas and Tom Lane

Discussion: http://postgr.es/m/CA+Tgmoa2VBZW6S8AAXfhpHczb=Rf6RqQ2br+zJvEgwJ0uoD_tQ@mail.gmail.com
2017-08-29 13:26:33 -04:00
Robert Haas 3452dc5240 Push tuple limits through Gather and Gather Merge.
If we only need, say, 10 tuples in total, then we certainly don't need
more than 10 tuples from any single process.  Pushing down the limit
lets workers exit early when possible.  For Gather Merge, there is
an additional benefit: a Sort immediately below the Gather Merge can
be done as a bounded sort if there is an applicable limit.

Robert Haas and Tom Lane

Discussion: http://postgr.es/m/CA+TgmoYa3QKKrLj5rX7UvGqhH73G1Li4B-EKxrmASaca2tFu9Q@mail.gmail.com
2017-08-29 13:16:55 -04:00
Tom Lane ce5dcf54b9 Improve docs about numeric formatting patterns (to_char/to_number).
The explanation about "0" versus "9" format characters was confusing
and arguably wrong; the discussion of sign handling wasn't very good
either.  Notably, while it's accurate to say that "FM" strips leading
zeroes in date/time values, what it really does with numeric values
is to strip *trailing* zeroes, and then only if you wrote "9" rather
than "0".  Per gripes from Erwin Brandstetter.

Discussion: https://postgr.es/m/CAGHENJ7jgRbTn6nf48xNZ=FHgL2WQ4X8mYsUAU57f-vq8PubEw@mail.gmail.com
Discussion: https://postgr.es/m/CAGHENJ45ymd=GOCu1vwV9u7GmCR80_5tW0fP9C_gJKbruGMHvQ@mail.gmail.com
2017-08-29 09:34:21 -04:00
Tom Lane dc4f356b44 Doc: adjust release-note credit for parallel pg_restore fix.
Discussion: https://postgr.es/m/CAFcNs+pJ6_Ud-zg3vY_Y0mzfESdM34Humt8avKrAKq_H+v18Cg@mail.gmail.com
2017-08-28 11:40:47 -04:00
Tom Lane 95e28b7f0c Fix over-aggressive sanity check in misc_sanity.sql.
Fix thinko in commit 8be8510cf: it's okay to have dbid == 0 in normal
(non-pin) entries in pg_shdepend, because global objects such as
databases are entered that way.  The test would pass so long as it
was run in a cluster containing no databases/tablespaces owned by,
or granted to, roles other than the bootstrap superuser.  That's the
expected situation for "make check", but for "make installcheck", not
so much.

Reported by Ryan Murphy.

Discussion: https://postgr.es/m/CAHeEsBc6EQe0mxGBKDXAwJbntgfvoAd5MQC-5362SmC3Tng_6g@mail.gmail.com
2017-08-28 10:14:46 -04:00
Peter Eisentraut 46596f8d6b Clarify documentation
Discussion: https://www.postgresql.org/message-id/flat/20170618071607.GA16418%40nol.local
2017-08-27 21:29:54 -04:00
Tom Lane f97c55c708 Release notes for 9.6.5, 9.5.9, 9.4.14, 9.3.19, 9.2.23. 2017-08-27 17:35:04 -04:00