Commit Graph

46146 Commits

Author SHA1 Message Date
Alvaro Herrera 9dc1225855 Silence compiler warning
My original coding was questionable anyway.

Reported-by: Sergei Kornilov
Discussion: https://postgr.es/m/9645101543575886@myt6-27270b78ac4f.qloud-c.yandex.net
2018-11-30 10:20:49 -03:00
Amit Kapila dcfdf56e89 Fix typo. 2018-11-30 11:50:43 +05:30
Michael Paquier 5c99513975 Fix various checksum check problems for pg_verify_checksums and base backups
Three issues are fixed in this patch:
- Base backups forgot to ignore files specific to EXEC_BACKEND, leading
to spurious warnings when checksums are enabled, per analysis from me.
- pg_verify_checksums forgot about files specific to EXEC_BACKEND,
leading to failures of the tool on any such build, particularly Windows.
This error was originally found by newly-introduced TAP tests in various
buildfarm members using EXEC_BACKEND.
- pg_verify_checksums forgot to account for temporary files and temporary
paths, which can look like valid relation files but carry no checksums, per
report from Andres Freund.  More tests are added to cover this case.

A new test case which emulates corruption for a file in a different
tablespace is added, coming from Michael Banck, while I have coded the
main code and refactored the test code.

Author: Michael Banck, Michael Paquier
Reviewed-by: Stephen Frost, David Steele
Discussion: https://postgr.es/m/20181021134206.GA14282@paquier.xyz
2018-11-30 10:34:45 +09:00
Michael Paquier a1c91dd110 Switch pg_verify_checksums back to a blacklist
This basically reverts commit d55241af70, leaving in place the portion
of the regression tests adapted for empty relation files and corrupted
cases.  The reverted approach was also proving unable to properly check
relation files located in a non-default tablespace path.

Per discussion with various folks, including Stephen Frost, David
Steele, Andres Freund, Michael Banck and myself.

Reported-by: Michael Banck
Discussion: https://postgr.es/m/20181021134206.GA14282@paquier.xyz
Backpatch-through: 11
2018-11-30 10:14:58 +09:00
Tom Lane d328991578 Document handling of invalid/ambiguous timestamp input near DST boundaries.
The source code comments documented this, but the user-facing docs, not
so much.  Add a section to Appendix B that discusses it.

In passing, improve a couple of other things in Appendix B --- notably,
correct a long-obsolete claim that time zone abbreviations are looked up
in a fixed table.

Per bug #15527 from Michael Davidson.

Discussion: https://postgr.es/m/15527-f1be0b4dc99ebbe7@postgresql.org
2018-11-29 18:28:10 -05:00
Alvaro Herrera 88bdbd3f74 Add log_statement_sample_rate parameter
This allows setting a lower log_min_duration_statement value without
incurring excessive log traffic (which reduces performance).  This can
be useful for analyzing workloads with lots of short queries.

Author: Adrien Nayrat
Reviewed-by: David Rowley, Vik Fearing
Discussion: https://postgr.es/m/c30ee535-ee1e-db9f-fa97-146b9f62caed@anayrat.info
2018-11-29 18:42:53 -03:00
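
A minimal sketch of how the new parameter might be combined with
log_min_duration_statement (values are hypothetical; both settings are
superuser-only):

    -- log statements slower than 100ms, but only a 10% random sample of them
    SET log_min_duration_statement = '100ms';
    SET log_statement_sample_rate = 0.1;
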
Tom Lane 826eff57c4 Ensure static libraries have correct mod time even if ranlib messes it up.
In at least Apple's version of ranlib, the output file is updated to have
a mod time equal to the max of the timestamps of its components, and that
data only has seconds precision.  On a filesystem with sub-second file
timestamp precision --- say, APFS --- this can result in the finished
static library appearing older than its input files, which causes useless
rebuilds and possible outright failures in parallel makes.

We've only seen this reported in the field from people using Apple's
ranlib with a non-Apple make, because Apple's make doesn't know about
sub-second timestamps either so it doesn't decide rebuilds are needed.
But Apple's ranlib presumably shares code with at least some BSDen,
so it's not that unlikely that the same problem could arise elsewhere.

To fix, just "touch" the output file after ranlib finishes.

We seem to need this in only one place.  There are other calls of
ranlib in our makefiles, but they are working on intermediate files
whose timestamps are not actually important, or else on an installed
static library for which sub-second timestamp precision is unlikely
to matter either.  (Also, so far as I can tell, Apple's ranlib doesn't
mess up the file timestamp in the latter usage anyhow.)

In passing, change "ranlib" to "$(RANLIB)" in one place that was
bypassing the make macro for no good reason.

Per bug #15525 from Jack Kelly (via Alyssa Ross).
Back-patch to all supported branches.

Discussion: https://postgr.es/m/15525-a30da084f17a1faa@postgresql.org
2018-11-29 15:53:44 -05:00
Peter Eisentraut 68120427f4 doc: Add appendix detailing some limits of PostgreSQL
This used to be on the web site but was removed.  The documentation is
a better place for it anyway.

Author: David Rowley <david.rowley@2ndquadrant.com>
Reviewed-by: John Naylor <jcnaylor@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CAKJS1f_dKdejdKB94nKZC9S5NzB-UZRcAKkE84e=JEEecDuotg@mail.gmail.com/
2018-11-29 14:01:11 +01:00
Michael Paquier 44e22647f8 Add pg_partition_tree to documentation index
This fixes an oversight from d5eec4ee.

Author: Michael Paquier
Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/20181129072719.GC9004@paquier.xyz
2018-11-29 18:00:08 +09:00
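
For reference, a hypothetical call to the newly indexed function (table
name made up):

    -- walk a partition hierarchy; pg_partition_tree() takes a regclass
    SELECT relid, parentrelid, isleaf, level
    FROM pg_partition_tree('measurement');
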
Michael Paquier 431f1599a2 Add support for NO_INSTALLCHECK in MSVC scripts
When fetching a list of tests for a given extension in contrib/ or
src/test/modules/, NO_INSTALLCHECK now gets checked first.  If present,
an empty list of tests is returned to let the caller know that tests
for this module need to be bypassed.

This actually fixes a set of issues with MSVC for modules using
REGRESS_OPTS, as incorrect parsing caused the launched command to eat
the first test listed.  The actual effect on the tree is that several
modules list only a single test, so their regression runs have been
exercising no actual tests.  pg_stat_statements, test_rls_hooks and
commit_ts were impacted by that.  Some other modules like test_decoding
(or snapshot_too_old) don't yet use PGXS rules, but their makefiles
will soon be refactored with an upcoming patch.

Author: Michael Paquier
Reviewed-by: Andrew Dunstan
Discussion: https://postgr.es/m/20181126054302.GI1776@paquier.xyz
2018-11-29 10:31:12 +09:00
Thomas Munro 2ac180c286 Fix minor typo in dsa.c.
Author: Takeshi Ideriha
Discussion: https://postgr.es/m/4E72940DA2BF16479384A86D54D0988A6F3BF22D%40G01JPEXMBKW04
2018-11-29 14:14:26 +13:00
Michael Paquier d79fb5d237 Add missing NO_INSTALLCHECK in commit_ts and test_rls_hooks
This bypasses installcheck if specified, which makes sense for those
modules as they require a non-default configuration, something which
typical users don't have.  These markings have been missing from the
start, but no back-patch is done.

This will be used by an upcoming patch adding support for
NO_INSTALLCHECK to the MSVC scripts, since installcheck is the default
mode for contrib and modules in the buildfarm for performance reasons.

Author: Michael Paquier
Reviewed-by: Andrew Dunstan
Discussion: https://postgr.es/m/20181126054302.GI1776@paquier.xyz
2018-11-29 09:39:07 +09:00
Michael Paquier 4c703369af Fix handling of synchronous replication for stopping WAL senders
This fixes an oversight from c6c3334, which forgot that while a subset
of WAL senders are stopping and in a sync state, other WAL senders could
still be waiting for a WAL position to be synced while committing a
transaction.  The stopping senders would not release such waiters,
potentially breaking synchronous replication guarantees.  This commit
makes sure that even stopping WAL senders are able to release waiters
and are tracked properly.

On 9.4, this can also trigger an assertion failure when, for example,
max_wal_senders is set to 1 and a WAL sender cannot find itself in a
synchronous state as the instance stops.

Reported-by: Paul Guo
Author: Paul Guo, Michael Paquier
Discussion: https://postgr.es/m/CAEET0ZEv8VFqT3C-cQm6byOB4r4VYWcef1J21dOX-gcVhCSpmA@mail.gmail.com
Backpatch-through: 9.4
2018-11-29 09:12:19 +09:00
Peter Geoghegan 1a990b207b Have BufFileSize() ereport() on FileSize() failure.
Move the responsibility for checking for and reporting a failure from
the only current BufFileSize() caller, logtape.c, to BufFileSize()
itself.  Code within buffile.c is generally responsible for interfacing
with fd.c to report irrecoverable failures.  This seems like a
convention that's worth sticking to.

Reorganizing things this way makes it easy to make the error message
raised in the event of BufFileSize() failure descriptive of the
underlying problem.  We're now clear on the distinction between
temporary file name and BufFile name, and can show errno, confident that
its value actually relates to the error being reported.  In passing, an
existing, similar buffile.c ereport() + errcode_for_file_access() site
is changed to follow the same conventions.

The API of the function BufFileSize() is changed by this commit, despite
already being in a stable release (Postgres 11).  This seems acceptable,
since the BufFileSize() ABI was changed by commit aa55183042, which
hasn't made it into a point release yet.  Besides, it's difficult to
imagine a third party BufFileSize() caller not just raising an error
anyway, since BufFile state should be considered corrupt when
BufFileSize() fails.

Per complaint from Tom Lane.

Discussion: https://postgr.es/m/26974.1540826748@sss.pgh.pa.us
Backpatch: 11-, where shared BufFiles were introduced.
2018-11-28 14:42:54 -08:00
Peter Eisentraut f2cbffc7a6 Only allow one recovery target setting
The previous recovery.conf regime accepted multiple recovery_target*
settings and used the last one.  This does not translate well to the
general GUC system.  Specifically, under EXEC_BACKEND, the settings
are written out not in any particular order, so the order in which
they were originally set is not available to new processes.

Rather than redesign the GUC system, it was decided to abandon the old
behavior and only allow one recovery target setting.  A second setting
will cause an error.  However, it is allowed to set the same parameter
multiple times or unset a parameter and set a different one.

Discussion: https://www.postgresql.org/message-id/flat/27802171543235530%40iva2-6ec8f0a6115e.qloud-c.yandex.net#701a59c837ad0bf8c244344aaf3ef5a4
2018-11-28 13:55:54 +01:00
Bruce Momjian eae9143d9a C comment: remove extra '*'
Reported-by: Etsuro Fujita

Discussion: https://postgr.es/m/5BFE34DE.1080404@lab.ntt.co.jp

Author: Etsuro Fujita

Backpatch-through: 10
2018-11-28 07:34:10 -05:00
Thomas Munro 0f9cdd7dca Don't set PAM_RHOST for Unix sockets.
Since commit 2f1d2b7a we have set PAM_RHOST to "[local]" for Unix
sockets.  This caused Linux PAM's libaudit integration to make DNS
requests for that name.  It's not exactly clear what value PAM_RHOST
should have in that case, but it seems clear that we shouldn't set it
to an unresolvable name, so don't do that.

Back-patch to 9.6.  Bug #15520.

Author: Thomas Munro
Reviewed-by: Peter Eisentraut
Reported-by: Albert Schabhuetl
Discussion: https://postgr.es/m/15520-4c266f986998e1c5%40postgresql.org
2018-11-28 14:12:30 +13:00
Tomas Vondra f69c959df0 Do not decode TOAST data for table rewrites
During table rewrites (VACUUM FULL and CLUSTER), the main heap is logged
using XLOG / FPI records, and thus (correctly) ignored in decoding.
But the associated TOAST table is WAL-logged as plain INSERT records,
and so was logically decoded and passed to reorder buffer.

That has severe consequences with TOAST tables of non-trivial size.
Firstly, the reorder buffer has to keep all those changes, possibly
spilling them to a file, incurring I/O costs and consuming disk space.

Secondly, ReorderBufferCommit() was stashing all those TOAST chunks into
a hash table, which got discarded only after processing the row from the
main heap.  But as the main heap is not decoded for rewrites, this never
happened, so all the TOAST data accumulated in memory, resulting either
in excessive memory consumption or OOM.

The fix is simple, as commit e9edc1ba already introduced infrastructure
(namely HEAP_INSERT_NO_LOGICAL flag) to skip logical decoding of TOAST
tables, but it only applied it to system tables.  So simply use it for
all TOAST data in raw_heap_insert().

That would however solve only the memory consumption issue - the TOAST
changes would still be decoded and added to the reorder buffer, and
spilled to disk (although without TOAST tuple data, so much smaller).
But we can solve that by tweaking DecodeInsert() to just ignore such
INSERT records altogether, using XLH_INSERT_CONTAINS_NEW_TUPLE flag,
instead of skipping them later in ReorderBufferCommit().

Review: Masahiko Sawada
Discussion: https://www.postgresql.org/message-id/flat/1a17c643-e9af-3dba-486b-fbe31bc1823a%402ndquadrant.com
Backpatch: 9.4-, where logical decoding was introduced
2018-11-28 01:43:08 +01:00
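
A rough sketch of the scenario using the test_decoding module (table,
slot name and sizes are made up; wal_level = logical is assumed):

    CREATE TABLE big_toast (id int PRIMARY KEY, payload text);
    ALTER TABLE big_toast ALTER COLUMN payload SET STORAGE EXTERNAL;
    SELECT pg_create_logical_replication_slot('rewrite_slot', 'test_decoding');
    INSERT INTO big_toast VALUES (1, repeat('x', 1000000));  -- stored out of line
    VACUUM FULL big_toast;  -- rewrites both the heap and its TOAST table
    -- before the fix, the rewrite's TOAST inserts were queued in the reorder buffer
    SELECT count(*) FROM pg_logical_slot_get_changes('rewrite_slot', NULL, NULL);
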
Tomas Vondra d1ce4ed2d5 Use wildcard to match parens after CREATE STATISTICS
CREATE STATISTICS completion was checking manually for the start and end
of the parenthesised list of types. That works, but we now have a better
way to do that as commit 121213d9d taught word_matches() to allow '*' in
the middle of an alternative. But it only applied that to tab completion
for EXPLAIN, ANALYZE and VACUUM. Use it for CREATE STATISTICS too.

Author: Dagfinn Ilmari Mannsåker
Discussion: https://www.postgresql.org/message-id/flat/d8jwooziy1s.fsf%40dalvik.ping.uio.no
2018-11-28 00:48:51 +01:00
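
For context, the parenthesised list being completed is the optional list
of statistics kinds, as in this hypothetical statement:

    CREATE STATISTICS sts_ab (ndistinct, dependencies) ON a, b FROM t1;
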
Thomas Munro d67dae036b Don't count zero-filled buffers as 'read' in EXPLAIN.
If you extend a relation, it should count as a block written, not
read (we write a zero-filled block).  If you ask for a zero-filled
buffer, it shouldn't be counted as read or written.

Later we might consider counting zero-filled buffers with a separate
counter, if they become more common due to future work.

Author: Thomas Munro
Reviewed-by: Haribabu Kommi, Kyotaro Horiguchi, David Rowley
Discussion: https://postgr.es/m/CAEepm%3D3JytB3KPpvSwXzkY%2Bdwc5zC8P8Lk7Nedkoci81_0E9rA%40mail.gmail.com
2018-11-28 11:58:10 +13:00
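
A sketch of where the distinction is visible (table name hypothetical):

    CREATE TABLE counter_demo (x int);
    -- relation extension now shows up under "written", not "read",
    -- in the Buffers line of the output
    EXPLAIN (ANALYZE, BUFFERS)
        INSERT INTO counter_demo SELECT generate_series(1, 100000);
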
Andres Freund 471a7af585 Ensure consistent sort order of large objects in pg_dump.
The primary purpose of this commit is to ensure pg_upgrade tests yield
comparable dumps pre/post upgrade, which got broken by 12a53c732 /
578b229718, as the order in pg_largeobject_metadata is likely to
differ pre/post upgrade.

It also seems like a generally good idea to make sure such dumps are
comparable, outside of pg_upgrade tests.

LO metadata was already dumped in an ordered manner, as the metadata
is dumped in a well-defined order via sortDumpableObjectsByTypeName()
and sortDumpableObjects().  But large object data is currently not
tracked via that mechanism.

As Tom points out, it seems possible that at some point dumpBlobs()
was assumed to dump out objects in a well-defined order, due to the use
of DISTINCT, which at that time was implemented only by sorting.

Per complaint from Andrew Dunstan and discussion with him and Tom
Lane.

Author: Andres Freund
Discussion: https://postgr.es/m/2735.1543333649@sss.pgh.pa.us
2018-11-27 12:16:55 -08:00
Andres Freund b238527664 Fix jit compilation bug on wide tables.
The function generated to perform JIT compiled tuple deforming failed
when HeapTupleHeader's t_hoff was bigger than what fits in a signed
int8.  I'd
failed to realize that LLVM's getelementptr would treat an int8 index
argument as signed, rather than unsigned.  That means that a hoff
larger than 127 would result in a negative offset being applied.  Fix
that by widening the index to 32bit.

Add a testcase with a wide table. Don't drop it, as it seems useful to
verify other tools deal properly with wide tables.

Thanks to Justin Pryzby for both reporting a bug and then reducing it
to a reproducible testcase!

Reported-By: Justin Pryzby
Author: Andres Freund
Discussion: https://postgr.es/m/20181115223959.GB10913@telsasoft.com
Backpatch: 11, just as jit compilation was
2018-11-27 10:07:03 -08:00
Peter Eisentraut f17889b221 Update ssl test certificates and keys
Debian testing and newer now require that RSA and DHE keys are at
least 2048 bit long and no longer allow SHA-1 for signatures in
certificates.  This is currently causing the ssl tests to fail there
because the test certificates and keys have been created in violation
of those conditions.

Update the parameters to create the test files and create a new set of
test files.

Author: Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>
Reported-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/20180917131340.GE31460%40paquier.xyz
2018-11-27 15:16:14 +01:00
Andres Freund 4c8750a9cc Fix ac218aa4f6 to work on versions before 9.5.
Unfortunately ac218aa4f6 missed the fact that a reference to
'pg_catalog.regnamespace'::regclass wouldn't work before that type is
known.  Fix that by replacing the regtype usage with a join to
pg_type.

Reported-By: Tom Lane
Author: Andres Freund
Discussion: https://postgr.es/m/8863.1543297423@sss.pgh.pa.us
Backpatch: 9.5-, like ac218aa4f6
2018-11-26 23:26:05 -08:00
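
A simplified sketch of a check query of this shape, using a join to
pg_type instead of a cast so it also runs where regnamespace is not yet
known (this is not the exact query pg_upgrade uses):

    SELECT n.nspname, c.relname, a.attname
    FROM pg_catalog.pg_class c
    JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
    JOIN pg_catalog.pg_attribute a ON a.attrelid = c.oid AND a.attnum > 0
    JOIN pg_catalog.pg_type t ON t.oid = a.atttypid
    WHERE t.typname IN ('regclass', 'regconfig', 'regdictionary',
                        'regnamespace', 'regoper', 'regoperator',
                        'regproc', 'regprocedure', 'regrole')
    ORDER BY n.nspname, c.relname, a.attname;
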
Andres Freund ac218aa4f6 Update pg_upgrade test for reg* to include regrole and regnamespace.
When the regrole (0c90f6769) and regnamespace (cb9fa802b) types were
added in 9.5, pg_upgrade's check for reg* types wasn't updated. While
regrole currently is safe, regnamespace is not.

It seems unlikely that anybody uses regnamespace inside catalog tables
across a pg_upgrade, but the tests should be correct nevertheless.

While at it, reorder the types checked in the query alphabetically;
otherwise it's annoying to compare the existing types with the ones
being tested for.

Author: Andres Freund
Discussion: https://postgr.es/m/037e152a-cb25-3bcb-4f35-bdc9988f8204@2ndQuadrant.com
Backpatch: 9.5-, as regrole/regnamespace
2018-11-26 17:00:43 -08:00
Bruce Momjian 7a9d6779d9 doc: fix wording for plpgsql, add "and"
Reported-by: Anthony Greene

Discussion: https://postgr.es/m/CAPRNmnsSZ4QL75FUjcS8ND_oV+WjgyPbZ4ch2RUwmW6PWzF38w@mail.gmail.com

Backpatch-through: 9.4
2018-11-26 19:41:29 -05:00
Andres Freund 54bb22f66a Fix typo introduced in 578b229718.
Author: Andreas Karlsson
Discussion: https://postgr.es/m/0917c86f-e906-27c0-740e-abc581480823@proxel.se
2018-11-26 15:27:34 -08:00
Andres Freund 12a53c732c Fix pg_upgrade for oid removal.
pg_upgrade previously copied pg_largeobject_metadata over from the old
cluster. That doesn't work, because the table has oids before
578b229718. I missed that.

As most pieces of metadata for large objects already were dumped as
DDL (except for comments overwritten by pg_upgrade, due to the copy of
pg_largeobject_metadata) it seems reasonable to just also dump grants
for large objects.  If we ever consider this a relevant performance
problem, we'd need to fix the rest of the already emitted DDL
too.

There's still an open discussion about whether we'll want to force a
specific ordering for the dumped objects, as currently
pg_largeobject_metadata potentially has a different ordering
before/after pg_upgrade, which can make automated testing a bit
harder.

Reported-By: Andrew Dunstan
Author: Andres Freund
Discussion: https://postgr.es/m/91a8a980-41bc-412b-fba2-2ba71a141c2b@2ndQuadrant.com
2018-11-26 14:37:08 -08:00
Tom Lane 70d7e507ef Fix translation of special characters in psql's LaTeX output modes.
latex_escaped_print() mistranslated \ and failed to provide any translation
for # ^ and ~, all of which would typically lead to LaTeX document syntax
errors.  In addition it didn't translate < > and |, which would typically
render as unexpected characters.

To some extent this represents shortcomings in ancient versions of LaTeX,
which if memory serves had no easy way to render these control characters
as ASCII text.  But that's been fixed for, um, decades.  In any case there
is no value in emitting guaranteed-to-fail output for these characters.

Noted while fooling with test cases added by commit 9a98984f4.  Back-patch
the code change to all supported versions.
2018-11-26 17:32:51 -05:00
Tom Lane 95dcb8fc05 Avoid locale-dependent output in numericlocale check.
I'd forgotten that in the buildfarm, parts of the regression tests
may run with psql exposed to a non-default LC_NUMERIC setting.
Hence we can't assume that C locale prevails, nor is there any
accessible way to force the setting for this single test step.
Lobotomize the test case added by commit 9a98984f4 so that it covers as
much as we can of print.c without having any locale-varying output.
2018-11-26 15:30:24 -05:00
Alvaro Herrera 67ed3b9d73 Fix sample output for hash_metapage_info query
One output column was duplicated.  Couldn't resist fixing the version
number while at it.

Reported-by: Gianni Ciolli
2018-11-26 17:24:14 -03:00
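
For reference, the general shape of a pageinspect query over a hash
index metapage (index name hypothetical):

    CREATE EXTENSION IF NOT EXISTS pageinspect;
    SELECT magic, version, ntuples, ffactor, bsize, bmsize, bmshift
    FROM hash_metapage_info(get_raw_page('some_hash_index', 0));
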
Tom Lane aa2ba50c2c Add CSV table output mode in psql.
"\pset format csv", or --csv, selects comma-separated values table format.
This is compliant with RFC 4180, except that we aren't too picky about
whether the record separator is LF or CRLF; also, the user may choose a
field separator other than comma.

This output format is directly compatible with the server's COPY CSV
format, and will also be useful as input to other programs.  It's
considerably safer for that purpose than the old recommendation to
use "unaligned" format, since the latter couldn't handle data
containing the field separator character.

Daniel Vérité, reviewed by Fabien Coelho and David Fetter, some
tweaking by me

Discussion: https://postgr.es/m/a8de371e-006f-4f92-ab72-2bbe3ee78f03@manitou-mail.org
2018-11-26 15:18:55 -05:00
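
Two hypothetical ways to use the new format:

    -- inside psql
    \pset format csv
    SELECT 1 AS a, 'hello, world' AS b;

    -- or non-interactively, from the shell
    psql --csv -c "SELECT 1 AS a, 'hello, world' AS b" > out.csv
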
Tom Lane 9a98984f49 Improve regression test coverage for psql output formats.
As penance for the "\pset format latex" silliness, add some regression
test coverage for the off-the-beaten-path output formats, which formerly
had exactly no coverage, except for some poorly-thought-out (unreadable,
repetitive, and incomplete) tests for asciidoc format.

I make no claims for the behavior exposed here actually being correct;
these test cases are just designed to ensure full code coverage in
fe_utils/print.c.  This brings the line coverage for that file up
from ~60% to ~93%.
2018-11-26 12:41:42 -05:00
Tom Lane a7eece4fc9 Fix breakage of "\pset format latex".
Commit eaf746a5b unintentionally made psql's "latex" output format
inaccessible, since not only "latex" but all abbreviations of it
were considered ambiguous against "latex-longtable".  Let's go
back to the longstanding behavior that all shortened versions
mean "latex", and you have to write at least "latex-" to get
"latex-longtable".  This leaves the only difference from pre-v12
behavior being that "\pset format a" is considered ambiguous.

The fact that the regression tests didn't expose this is pretty bad,
but fixing it is material for a separate commit.

Discussion: https://postgr.es/m/cb7e1caf-3ea6-450d-af28-f524903a030c@manitou-mail.org
2018-11-26 12:31:20 -05:00
Alvaro Herrera 36d442a25a Clarify that cross-row constraints are unsupported
Maybe we'll implement them later, or maybe not, but let's make the
status quo clear for now.

Author: Lætitia Avrot, Patrick Francelle
Reviewers: too many to list
Discussion: https://postgr.es/m/CAB_COdhUuzNFOJfc7SNNso5rOuVA3ui93KMVunEM8Yih+K5A6A@mail.gmail.com
2018-11-26 12:38:19 -03:00
Michael Paquier 664f01b613 Revert "Fix typo in documentation of toast storage"
This reverts commit 058ef3a, per complaints from Magnus Hagander and
Vik Fearing.
2018-11-26 16:43:19 +09:00
Michael Paquier 058ef3a1a8 Fix typo in documentation of toast storage
Author: Nawaz Ahmed
Discussion: https://postgr.es/m/154319327168.1315.1846953598601966513@wrigleys.postgresql.org
2018-11-26 15:49:23 +09:00
Michael Paquier 1d7dd18686 Revert all new recent changes to add PGXS options for TAP and isolation
A set of failures on buildfarm machines is proving that this is not
quite ready yet, because of another set of issues:
- MSVC scripts assume that REGRESS_OPTS can only use top_builddir.  Some
test suites actually finish by using top_srcdir, like pg_stat_statements,
which causes its regression tests to never run.
- Trying to enforce top_builddir does not work either when using VPATH
as this is not recognized properly.
- TAP tests of bloom are unstable on various platforms, causing various
failures.
2018-11-26 11:12:11 +09:00
Michael Paquier 3955cae0c5 Fix regression test handling of test_decoding with MSVC
The set of scripts in charge of running the regression tests for MSVC
currently runs under the assumption that only $(top_builddir) can be
used in option values defined in REGRESS_OPTS, and those options also
need a specific format to be parsed correctly, so fix the Makefile
values so that they are set correctly.

Per complaints from buildfarm members dory and whelk, with some extra
testing done on my side with MSVC to check this patch.
2018-11-26 10:49:49 +09:00
Michael Paquier b0b1f4183a Disable temporarily TAP tests for contrib/bloom/
The recent commit 03faa4a8 enabled those tests; however, several
buildfarm members are complaining about their stability on Windows and
macOS.  This keeps the buildfarm green while the root problem is
investigated.

Discussion: https://postgr.es/m/20181126003351.GE1776@paquier.xyz
2018-11-26 09:42:21 +09:00
Michael Paquier 03faa4a8dd Add PGXS options to control TAP and isolation tests
The following options are added for extensions:
- TAP_TESTS, to allow an extension to run TAP tests, which are the
ones present in t/*.pl.  A subset of tests can always be run with the
existing PROVE_TESTS for developers.
- ISOLATION, to define a list of isolation tests.
- ISOLATION_OPTS, to pass custom options to isolation_tester.

A number of custom Makefile targets have accumulated across the tree
over a couple of releases to cover the lack of such a facility in PGXS
when using those test suites; these are all now replaced with the new
flags, without reducing the test coverage.  This also fixes an issue
with contrib/bloom/, which had its own custom target to trigger its
TAP tests, not run as part of the main check runs.

Author: Michael Paquier
Reviewed-by: Adam Berlin, Álvaro Herrera, Tom Lane, Nikolay Shaplov,
Arthur Zakirov
Discussion: https://postgr.es/m/20180906014849.GG2726@paquier.xyz
2018-11-26 08:39:19 +09:00
Peter Eisentraut 2dedf4d9a8 Integrate recovery.conf into postgresql.conf
recovery.conf settings are now set in postgresql.conf (or other GUC
sources).  Currently, all the affected settings are PGC_POSTMASTER;
this could be refined in the future case by case.

Recovery is now initiated by a file recovery.signal.  Standby mode is
initiated by a file standby.signal.  The standby_mode setting is
gone.  If a recovery.conf file is found, an error is issued.

The trigger_file setting has been renamed to promote_trigger_file as
part of the move.

The documentation chapter "Recovery Configuration" has been integrated
into "Server Configuration".

pg_basebackup -R now appends settings to postgresql.auto.conf and
creates a standby.signal file.

Author: Fujii Masao <masao.fujii@gmail.com>
Author: Simon Riggs <simon@2ndquadrant.com>
Author: Abhijit Menon-Sen <ams@2ndquadrant.com>
Author: Sergei Kornilov <sk@zsrv.org>
Discussion: https://www.postgresql.org/message-id/flat/607741529606767@web3g.yandex.ru/
2018-11-25 16:33:40 +01:00
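
A sketch of what the new layout might look like for a standby
(connection string and command are hypothetical):

    -- settings formerly in recovery.conf are now ordinary GUCs, e.g. in
    -- postgresql.conf or postgresql.auto.conf:
    --   primary_conninfo = 'host=primary.example.com user=replicator'
    --   restore_command  = 'cp /archive/%f %p'
    -- standby mode is requested by an empty standby.signal file in the data
    -- directory (recovery.signal for targeted recovery).  Once the server is
    -- running, the values are visible like any other setting:
    SHOW primary_conninfo;
    SELECT name, setting FROM pg_settings WHERE name ~ '^recovery_target';
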
Thomas Munro ab69ea9fee Fix assertion failure for SSL connections.
Commit cfdf4dc4 added an assertion that every WaitLatch() or similar
handles postmaster death.  One place did not, but was missed in
review and testing due to the need for an SSL connection.  Fix, by
asking for WL_EXIT_ON_PM_DEATH.

Reported-by: Christoph Berg
Discussion: https://postgr.es/m/20181124143845.GA15039%40msg.df7cb.de
2018-11-25 18:34:58 +13:00
Andrew Gierth d5890f49da Fix hstore hash function for empty hstores upgraded from 8.4.
Hstore data generated on pg 8.4 and pg_upgraded to current versions
remains in its original on-disk format unless modified. The same goes
for values generated by the addon hstore-new module on pre-9.0
versions. (The hstoreUpgrade function converts old values on the fly
when read in, but the on-disk value is not modified by this.)

Since old-format empty hstores (and hstore-new hstores) have
representations compatible with the new format, hstoreUpgrade thought
it could get away without modifying such values; but this breaks
hstore_hash (and the new hstore_hash_extended) which assumes
bit-perfect matching between semantically identical hstore values.
Only one bit actually differs (the "new version" flag in the count
field) but that of course is enough to break the hash.

Fix by making hstoreUpgrade unconditionally convert all old values to
new format.

Backpatch all the way, even though this changes a hash value in some
cases, because in those cases the hash value is already failing - for
example, a hash join between old- and new-format empty hstores will be
failing to match, or a hash index on an hstore column containing an
old-format empty value will be failing to find the value since it will
be searching for a hash derived from a new-format datum. (There are no
known field reports of this happening, probably because hashing of
hstores has only been useful in limited circumstances and there
probably isn't much upgraded data being used this way.)

Per concerns arising from discussion of commit eb6f29141b. Original
bug is my fault.

Discussion: https://postgr.es/m/60b1fd3b-7332-40f0-7e7f-f2f04f777747%402ndquadrant.com
2018-11-24 09:59:49 +00:00
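
A sketch of where the mismatch would surface (tables are hypothetical;
old_docs is assumed to hold hstore values carried over from an 8.4
cluster by pg_upgrade):

    -- a hash index or hash join on hstore assumes equal values hash equally
    CREATE INDEX ON new_docs USING hash (attrs);
    SELECT * FROM new_docs WHERE attrs = ''::hstore;  -- empty-hstore lookup
    SELECT count(*) FROM old_docs o JOIN new_docs n ON o.attrs = n.attrs;
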
Andrew Gierth 757c5182f2 Avoid crashes in contrib/intarray gist__int_ops (bug #15518)
1. Integer overflow in internal_size could result in memory corruption
in decompression since a zero-length array would be allocated and then
written to. This leads to crashes or corruption when traversing an
index which has been populated with sufficiently sparse values. Fix by
using int64 for computations and checking for overflow.

2. Integer overflow in g_int_compress could cause pessimal merge
choices, resulting in unnecessarily large ranges (which would in turn
trigger issue 1 above). Fix by using int64 again.

3. Even without overflow, array sizes could become large enough to
cause unexplained memory allocation errors. Fix by capping the sizes
to a safe limit and reporting actual errors pointing at gist__intbig_ops
as needed.

4. Large inputs to the compression function always consist of large
runs of consecutive integers, and the compression loop was processing
these one at a time in an O(N^2) manner with a lot of overhead. The
expected runtime of this function could easily exceed 6 months for a
single call as a result. Fix by performing a linear-time first pass,
which reduces the worst case to something on the order of seconds.

Backpatch all the way, since this has been wrong forever.

Per bug #15518 reported by irc user "dymk"; analysis and patch by me.

Discussion: https://postgr.es/m/15518-799e426c3b4f8358@postgresql.org
2018-11-24 08:48:36 +00:00
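
A sketch of the kind of index affected (table and values hypothetical):

    CREATE EXTENSION IF NOT EXISTS intarray;
    CREATE TABLE tags (id serial PRIMARY KEY, tag_ids int[]);
    -- sufficiently sparse values previously risked overflow in the compression step
    INSERT INTO tags (tag_ids) VALUES (ARRAY[1, 50000000, 2000000000]);
    CREATE INDEX ON tags USING gist (tag_ids gist__int_ops);
    SELECT id FROM tags WHERE tag_ids && ARRAY[50000000];
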
Tom Lane 452b637d4b Adjust new test case for more portability.
Early returns from the buildfarm say that most critters are good with
commit cbdb8b4c0, but gaur gives unexpected results with the test case
involving a float8 that's one-ULP-less-than-2^63.  It appears that that
platform's version of rint() rounds that value up to 2^63 instead of
leaving it be.  This is possibly a bug, and it's also possible that no
other platform anybody is using anywhere behaves likewise.  Still, the
point of the test is not to insist that everybody's rint() behaves exactly
the same.  Let's use two-ULPs-less-than-2^63 instead, which I've tested
to act the same on gaur as on more modern hardware.

(This is, more or less, exactly the portability issue I'd feared might
arise...)

Discussion: https://postgr.es/m/15519-4fc785b483201ff1@postgresql.org
2018-11-23 23:49:25 -05:00
Tom Lane cbdb8b4c01 Fix float-to-integer coercions to handle edge cases correctly.
ftoi4 and its sibling coercion functions did their overflow checks in
a way that looked superficially plausible, but actually depended on an
assumption that the MIN and MAX comparison constants can be represented
exactly in the float4 or float8 domain.  That fails in ftoi4, ftoi8,
and dtoi8, resulting in a possibility that values near the MAX limit will
be wrongly converted (to negative values) when they need to be rejected.

Also, because we compared before rounding off the fractional part,
the other three functions threw errors for values that really ought
to get rounded to the min or max integer value.

Fix by doing rint() first (requiring an assumption that it handles
NaN and Inf correctly; but dtoi8 and ftoi8 were assuming that already),
and by comparing to values that should coerce to float exactly, namely
INTxx_MIN and -INTxx_MIN.  Also remove some random cosmetic discrepancies
between these six functions.

Per bug #15519 from Victor Petrovykh.  This should get back-patched,
but first let's see what the buildfarm thinks of it --- I'm not too
sure about portability of some of the regression test cases.

Patch by me; thanks to Andrew Gierth for analysis and discussion.

Discussion: https://postgr.es/m/15519-4fc785b483201ff1@postgresql.org
2018-11-23 20:57:11 -05:00
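
Two hedged examples of the edge cases involved (exact rounding near the
limit depends on the platform's rint()):

    -- 2^63 - 2048 is exactly representable in float8 and fits in bigint
    SELECT 9223372036854773760::float8::int8;
    -- INT64_MAX is not representable in float8; it rounds up to 2^63, which is
    -- out of range for bigint, so the cast should fail cleanly rather than wrap
    SELECT 9223372036854775807::float8::int8;
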
Tom Lane eb6f29141b Add a 64-bit hash function for type hstore.
There's some question about the correctness of the hash function, but
if it's wrong, the 32-bit version is also wrong.

Amul Sul, reviewed by Hironobu Suzuki

Discussion: https://postgr.es/m/CAAJ_b947JjnNr9Cp45iNjSqKf6PA5mCTmKsRwPjows93YwQrmw@mail.gmail.com
2018-11-23 13:37:34 -05:00
Tom Lane 48c41fa974 Add a 64-bit hash function for type citext.
Amul Sul, reviewed by Hironobu Suzuki

Discussion: https://postgr.es/m/CAAJ_b947JjnNr9Cp45iNjSqKf6PA5mCTmKsRwPjows93YwQrmw@mail.gmail.com
2018-11-23 13:24:45 -05:00
Tom Lane a314c34079 Clamp semijoin selectivity to be not more than inner-join selectivity.
We should never estimate the output of a semijoin to be more rows than
we estimate for an inner join with the same input rels and join condition;
it's obviously impossible for that to happen.  However, given the
relatively poor quality of our semijoin selectivity estimates ---
particularly, but not only, in cases where we punt and return a default
estimate --- we did often deliver such estimates.  To improve matters,
calculate both estimates inside eqjoinsel() and take the smaller one.

The bulk of this patch is just mechanical refactoring to avoid repetitive
information lookup when we call both eqjoinsel_semi and eqjoinsel_inner.
The actual new behavior is just

	selec = Min(selec, inner_rel->rows * selec_inner);

which looks a bit odd but is correct because of our different definitions
for inner and semi join selectivity.

There is one ensuing plan change in the regression tests, but it looks
reasonable enough (and checking the actual row counts shows that the
estimate moved closer to reality, not further away).

Per bug #15160 from Alexey Ermakov.  Although this is arguably a bug fix,
I won't risk destabilizing plan choices in stable branches by
back-patching.

Tom Lane, reviewed by Melanie Plageman

Discussion: https://postgr.es/m/152395805004.19366.3107109716821067806@wrigleys.postgresql.org
2018-11-23 12:48:49 -05:00
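
A sketch of the kind of semijoin whose row estimate is now clamped
(tables are hypothetical):

    -- the EXISTS is planned as a semijoin; its estimated row count can no
    -- longer exceed the estimate for the equivalent plain inner join
    EXPLAIN
    SELECT * FROM orders o
    WHERE EXISTS (SELECT 1 FROM order_lines l WHERE l.order_id = o.id);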