Commit Graph

1930 Commits

Author SHA1 Message Date
Heikki Linnakangas 37d2ff3803 pg_rewind: Refactor the abstraction to fetch from local/libpq source.
This makes the abstraction of a "source" server clearer, by introducing
a common abstract class, borrowing the object-oriented programming term,
that represents all the operations that can be done on the source server.
There are two implementations of it, one that fetches via libpq and
another that fetches from a local directory.  This adds some code, but
makes it easier to understand what's going on.

The copy_executeFileMap() and libpq_executeFileMap() functions contained
basically the same logic, just calling different functions to fetch the
source files. Refactor so that the common logic is in one place, in a new
function called perform_rewind().

Reviewed-by: Kyotaro Horiguchi, Soumyadeep Chakraborty
Discussion: https://www.postgresql.org/message-id/0c5b3783-af52-3ee5-f8fa-6e794061f70d%40iki.fi
2020-11-04 11:21:18 +02:00
Michael Paquier e152506ade Revert pg_relation_check_pages()
This reverts the following set of commits, after complaints about the
lack of portability of the central part of the code in bufmgr.c as
well as about the use of partition mapping locks during page reads:
c780a7a9
f2b88396
b787d4ce
ce7f772c
60a51c6b

Per discussion with Andres Freund, Robert Haas and myself.

Bump catalog version.

Discussion: https://postgr.es/m/20201029181729.2nrub47u7yqncsv7@alap3.anarazel.de
2020-11-04 10:21:46 +09:00
Thomas Munro 257836a755 Track collation versions for indexes.
Record the current version of dependent collations in pg_depend when
creating or rebuilding an index.  When accessing the index later, warn
that the index may be corrupted if the current version doesn't match.

Thanks to Douglas Doole, Peter Eisentraut, Christoph Berg, Laurenz Albe,
Michael Paquier, Robert Haas, Tom Lane and others for very helpful
discussion.

Author: Thomas Munro <thomas.munro@gmail.com>
Author: Julien Rouhaud <rjuju123@gmail.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com> (earlier versions)
Discussion: https://postgr.es/m/CAEepm%3D0uEQCpfq_%2BLYFBdArCe4Ot98t1aR4eYiYTe%3DyavQygiQ%40mail.gmail.com
2020-11-03 01:19:50 +13:00
Michael Paquier f2b8839695 Add pg_relation_check_pages() to check on-disk pages of a relation
This makes use of CheckBuffer() introduced in c780a7a, adding a SQL
wrapper able to do checks for all the pages of a relation.  By default,
all the fork types of a relation are checked, and it is possible to
check only a given relation fork.  Note that if the relation given as
input has no physical storage or is temporary, then no errors are
generated, allowing full-database checks when coupled with a simple scan
of pg_class, for example.  This is not limited to clusters with data
checksums enabled, as clusters without data checksums can still apply
checks on pages using the page headers or by detecting pages full of
zeros.

This function returns a set of tuples consisting of:
- The physical file where a broken page has been detected (without the
segment number, as that can be AM-dependent; for heap it can be guessed
from the block number).  A path relative to the data directory is used.
- The block number of the broken page.

By default, only superusers have access to this function, but
execution rights can be granted to other users.
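For illustration, a usage sketch based on the description above (the exact
signature is assumed from this message, and the function was later reverted
by e152506ade above):

  -- Check all forks of a relation (hypothetical table name).
  SELECT * FROM pg_relation_check_pages('my_table'::regclass);

  -- Restrict the check to a single relation fork.
  SELECT * FROM pg_relation_check_pages('my_table'::regclass, 'main');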

The feature introduced here is still minimal, and more improvements
could be done, like:
- Addition of a start and end block number to run checks on a range
of blocks, which would apply only if one fork type is checked.
- Addition of some progress reporting.
- Throttling, with configuration parameters in function input or
potentially some cost-based GUCs.

Regression tests are added for positive cases in the main regression
test suite, and TAP tests are added for cases involving the emulation of
page corruptions.

Bump catalog version.

Author: Julien Rouhaud, Michael Paquier
Reviewed-by: Masahiko Sawada, Justin Pryzby
Discussion: https://postgr.es/m/CAOBaU_aVvMjQn=ge5qPiJOPMmOj5=ii3st5Q0Y+WuLML5sR17w@mail.gmail.com
2020-10-28 12:15:00 +09:00
Michael Paquier 783f0cc64d Improve performance of Unicode {de,re}composition in the backend
This replaces the existing binary search with two perfect hash functions
for the composition and the decomposition in the backend code, at the
cost of slightly larger binaries there (35kB in libpgcommon_srv.a).  Per
the measurements done, this improves the speed of the recomposition and
decomposition by up to 30~40 times for the NFC and NFKC conversions,
while all other operations get at least 40% faster.  This is not as
"good" as what libicu has, but it closes the gap a lot as per the
feedback from Daniel Verite.
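As an illustrative aside, this backend code is what sits behind the SQL-level
normalize() function (assuming that is the main user-visible path affected):

  -- 'e' followed by COMBINING ACUTE ACCENT recomposes to the precomposed form.
  SELECT normalize(U&'\0065\0301', NFC) = U&'\00E9';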

The decomposition table remains the same, getting used for the binary
search in the frontend code, where we care more about the size of
libraries like libpq than about performance, as this gets involved only
in code paths related to SCRAM authentication.  As a consequence, note
that the perfect hash function for the recomposition needs to use a new
inverse lookup array back to the existing decomposition table.

The size of all frontend deliverables remains unchanged, even with
--enable-debug, including libpq.

Author: John Naylor
Reviewed-by: Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/CAFBsxsHUuMFCt6-pU+oG-F1==CmEp8wR+O+bRouXWu6i8kXuqA@mail.gmail.com
2020-10-23 11:05:46 +09:00
Robert Haas 866e24d47d Extend amcheck to check heap pages.
Mark Dilger, reviewed by Peter Geoghegan, Andres Freund, Álvaro Herrera,
Michael Paquier, Amul Sul, and by me. Some last-minute cosmetic
revisions by me.

Discussion: http://postgr.es/m/12ED3DA8-25F0-4B68-937D-D907CFBF08E7@enterprisedb.com
2020-10-22 08:44:18 -04:00
Michael Paquier 19ae53c92d Review format of code generated by PerfectHash.pm
80f8eb7 added to the normalization quick check headers some code
generated by PerfectHash.pm that is incompatible with the settings of
gitattributes for this repository, as whitespace followed a set of tabs
for the first element of a line in the table.  Instead of adding a new
exception to gitattributes, rework the generated format so that right
padding with spaces is used instead of left padding.  This keeps the
generated table in a readable shape with its set of columns, making an
update of gitattributes unnecessary.

Reported-by: Peter Eisentraut
Author: John Naylor
Discussion: https://postgr.es/m/d601b3b5-a3c7-5457-2f84-3d6513d690fc@2ndquadrant.com
2020-10-21 09:22:27 +09:00
Tom Lane ce0e97f808 Sync our copy of the timezone library with IANA release tzcode2020c.
This changes zic's default output format from "-b fat" to "-b slim".
We were already using "slim" in v13/HEAD, so those branches drop
the explicit -b switch in the Makefiles.  Instead, add an explicit
"-b fat" in v12 and before, so that we don't change the output file
format in those branches.  (This is perhaps excessively conservative,
but we decided not to do so in a12079109, and I'll stick with that.)

Other non-cosmetic changes are to drop support for zic's long-obsolete
"-y" switch, and to ensure that strftime() does not change errno
unless it fails.

As usual with tzcode changes, back-patch to all supported branches.
2020-10-16 21:40:16 -04:00
Peter Eisentraut 39b4a95100 Use https for gnu.org links
Mostly already done, but there were some stragglers.
2020-10-14 08:24:54 +02:00
Michael Paquier 80f8eb79e2 Use perfect hash for NFC and NFKC Unicode Normalization quick check
This makes the normalization quick check about 30% faster for NFC and
50% faster for NFKC than the binary search used previously.  The hash
lookup reuses the existing array of bit fields used for the binary
search to get the quick check property and is generated as part of "make
update-unicode" in src/common/unicode/.

Author: John Naylor
Reviewed-by: Mark Dilger, Michael Paquier
Discussion: https://postgr.es/m/CACPNZCt4fbJ0_bGrN5QPt34N4whv=mszM0LMVQdoa2rC9UMRXA@mail.gmail.com
2020-10-11 19:09:01 +09:00
Tom Lane bed90759fc Fix our Windows stat() emulation to handle file sizes > 4GB.
Hack things so that our idea of "struct stat" is equivalent to Windows'
struct __stat64, allowing it to have a wide enough st_size field.

Instead of relying on native stat(), use GetFileInformationByHandle().
This avoids a number of issues with Microsoft's multiple and rather
slipshod emulations of stat().  We still need to jump through hoops
to deal with ERROR_DELETE_PENDING, though :-(

Pull the relevant support code out of dirmod.c and put it into
its own file, win32stat.c.

Still TODO: do we need to do something different with lstat(),
rather than treating it identically to stat()?

Juan José Santamaría Flecha, reviewed by Emil Iggland;
based on prior work by Michael Paquier, Sergey Zubkovsky, and others

Discussion: https://postgr.es/m/1803D792815FC24D871C00D17AE95905CF5099@g01jpexmbkw24
Discussion: https://postgr.es/m/15858-9572469fd3b73263@postgresql.org
2020-10-09 16:20:12 -04:00
Michael Paquier 2a73164581 Improve set of candidate multipliers for perfect hash function generation
The previous set of multipliers was not adapted for large sets of short
keys, and this new set of multipliers makes it possible to generate
perfect hash functions for larger sets without impacting existing callers
of those functions, as experimentation has shown.  A future commit will
make use of that to improve the performance of Unicode normalization.

All multipliers compile to shift-and-add instructions on most platforms.
This has been tested as far back as gcc 4.1 and clang 3.8.

Author: John Naylor
Reviewed-by: Mark Dilger, Michael Paquier
Discussion: https://postgr.es/m/CACPNZCt4fbJ0_bGrN5QPt34N4whv=mszM0LMVQdoa2rC9UMRXA@mail.gmail.com
2020-10-08 13:16:43 +09:00
Amit Kapila 9868167500 Track statistics for spilling of changes from ReorderBuffer.
This adds statistics about transactions spilled to disk from
ReorderBuffer.  Users can query the pg_stat_replication_slots view to check
these stats and call pg_stat_reset_replication_slot to reset the stats of
a particular slot.  Users can pass NULL to pg_stat_reset_replication_slot
to reset the stats of all the slots.
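A short usage sketch of the interfaces named above (the slot name is
hypothetical):

  -- Inspect the spill statistics of all replication slots.
  SELECT * FROM pg_stat_replication_slots;

  -- Reset the stats of one slot, or of all slots by passing NULL.
  SELECT pg_stat_reset_replication_slot('my_slot');
  SELECT pg_stat_reset_replication_slot(NULL);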

This commit extends the statistics collector to track this information
about slots.

Author: Sawada Masahiko and Amit Kapila
Reviewed-by: Amit Kapila and Dilip Kumar
Discussion: https://postgr.es/m/CA+fd4k5_pPAYRTDrO2PbtTOe0eHQpBvuqmCr8ic39uTNmR49Eg@mail.gmail.com
2020-10-08 09:09:08 +05:30
Fujii Masao 8d9a935965 Add pg_stat_wal statistics view.
This view shows statistics about WAL activity. Currently it has only
two columns: wal_buffers_full and stats_reset. The wal_buffers_full column
indicates the number of times WAL data was written to disk because the
WAL buffers got full. This information is useful when tuning wal_buffers.
The stats_reset column indicates the time at which these statistics were
last reset.
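For example, the two columns described above can be read directly:

  -- Number of WAL writes caused by full WAL buffers, and the last reset time.
  SELECT wal_buffers_full, stats_reset FROM pg_stat_wal;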

pg_stat_wal view is also the basic infrastructure to expose other
various statistics about WAL activity later.

Bump PGSTAT_FILE_FORMAT_ID due to the change in pgstat format.

Bump catalog version.

Author: Masahiro Ikeda
Reviewed-by: Takayuki Tsunakawa, Kyotaro Horiguchi, Amit Kapila, Fujii Masao
Discussion: https://postgr.es/m/188bd3f2d2233cf97753b5ced02bb050@oss.nttdata.com
2020-10-02 10:17:11 +09:00
Thomas Munro dee663f784 Defer flushing of SLRU files.
Previously, we called fsync() after writing out individual pg_xact,
pg_multixact and pg_commit_ts pages due to cache pressure, leading to
regular I/O stalls in user backends and recovery.  Collapse requests for
the same file into a single system call as part of the next checkpoint,
as we already did for relation files, using the infrastructure developed
by commit 3eb77eba.  This can cause a significant improvement to
recovery performance, especially when it's otherwise CPU-bound.

Hoist ProcessSyncRequests() up into CheckPointGuts() to make it clearer
that it applies to all the SLRU mini-buffer-pools as well as the main
buffer pool.  Rearrange things so that data collected in CheckpointStats
includes SLRU activity.

Also remove the Shutdown{CLOG,CommitTS,SUBTRANS,MultiXact}() functions,
because they were redundant after the shutdown checkpoint that
immediately precedes them.  (I'm not sure if they were ever needed, but
they aren't now.)

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> (parts)
Tested-by: Jakub Wartak <Jakub.Wartak@tomtom.com>
Discussion: https://postgr.es/m/CA+hUKGLJ=84YT+NvhkEEDAuUtVHMfQ9i-N7k_o50JmQ6Rpj_OQ@mail.gmail.com
2020-09-25 19:00:15 +12:00
Tom Lane c4133ec169 Exclude fmgrprotos.h from pgindent processing.
pgindent messes up entries in this file if their names match
typedef names.  While there's reason to avoid choosing conflicting
names, we have some historical exceptions, and there's no guarantee
that more duplicates won't appear in future.  Since this is a derived
file anyway, there's little harm in just excluding it.

I said yesterday that all our derived files are pgindent-clean, or else
explicitly excluded from indentation, but I'd forgotten about this one.
Now that project is really done, as confirmed by a test run.

Discussion: https://postgr.es/m/79ed5348-be7a-b647-dd40-742207186a22@2ndquadrant.com
2020-09-22 11:32:10 -04:00
Tom Lane 74d4608f50 Further improve pgindent's list of file exclusions.
I despair of people keeping the README file's notes in sync with
the actual exclusion list, so move the notes into the exclusion file.
Adjust the pgindent script to explicitly ignore comments in the file,
just in case (though it's hard to believe any would match filenames).

Extend the list so that it doesn't process any files we'd not wish it to
even in a fully-built-out development directory.  (There are still a
couple of derived files that it wants to reformat, but fixing the
generator scripts for those seems like fit material for a separate patch.)

Discussion: https://postgr.es/m/79ed5348-be7a-b647-dd40-742207186a22@2ndquadrant.com
2020-09-17 14:16:18 -04:00
Tom Lane add105840b Improve formatting of create_help.pl and plperl_opmask.pl output.
Adjust the whitespace in the emitted files so that it matches
what pgindent would do.  This makes the generated files look
like they match project style, and avoids confusion if someone
does run pgindent on the generated files.

Also, add probes.h to pgindent's exclusion list, because it can
confuse pgindent, plus there's not much point in processing it.

Daniel Gustafsson, additional fixes by me

Discussion: https://postgr.es/m/79ed5348-be7a-b647-dd40-742207186a22@2ndquadrant.com
2020-09-16 20:31:37 -04:00
Robert Haas 34a947ca13 New contrib module, pg_surgery, with heap surgery functions.
Sometimes it happens that the visibility information for a tuple
becomes corrupted, either due to bugs in the database software or
external factors. Provide a function heap_force_kill() that can
be used to truncate such dead tuples to dead line pointers, and
a function heap_force_freeze() that can be used to overwrite the
visibility information in such a way that the tuple becomes
all-visible.
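An illustrative sketch of how these functions might be invoked (the table
name and line pointers are hypothetical, and the argument types are assumed
from the description):

  CREATE EXTENSION pg_surgery;

  -- Truncate specific dead tuples to dead line pointers.
  SELECT heap_force_kill('damaged_table'::regclass, ARRAY['(0,1)']::tid[]);

  -- Overwrite visibility information so that the tuples become all-visible.
  SELECT heap_force_freeze('damaged_table'::regclass, ARRAY['(0,2)']::tid[]);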

These functions are unsafe, in that you can easily use them to
corrupt a database that was not previously corrupted, and you can
use them to further corrupt an already-corrupted database or to
destroy data. The documentation accordingly cautions against
casual use. However, in some cases they permit recovery of data
that would otherwise be very difficult to recover, or allow a
system to continue to function when it would otherwise be difficult
to do so.

Because we may want to add other functions for performing other
kinds of surgery in the future, the new contrib module is called
pg_surgery rather than something specific to these functions. I
proposed back-patching this so that it could be more easily used
by people running existing releases who are facing these kinds of
problems, but that proposal did not attract enough support, so
no back-patch for now.

Ashutosh Sharma, reviewed and tested by Andrey M. Borodin,
M. Beena Emerson, Masahiko Sawada, Rajkumar Raghuwanshi,
Asim Praveen, and Mark Dilger, and somewhat revised by me.

Discussion: http://postgr.es/m/CA+TgmoZW1fsU-QUNCRUQMGUygBDPVeOTLCqRdQZch=EYZnctSA@mail.gmail.com
2020-09-10 11:14:07 -04:00
Michael Paquier a6642b3ae0 Add support for partitioned tables and indexes in REINDEX
Until now, REINDEX was not able to work with partitioned tables and
indexes, forcing users to reindex partitions one by one.  This extends
REINDEX INDEX and REINDEX TABLE so that they can accept a partitioned
index and table as input, respectively, reindexing all the partitions
assigned to them that have physical storage (foreign tables, partitioned
tables and indexes are discarded).
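For illustration, with this change the following now recurse into all
partitions that have storage (relation names are hypothetical):

  REINDEX TABLE partitioned_tab;
  REINDEX INDEX partitioned_tab_pkey;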

This shares some logic with schema and database REINDEX, as each
partition gets processed in its own transaction after building a list of
relations to work on.  This choice has the advantage of minimizing the
number of invalid indexes to one partition with REINDEX CONCURRENTLY in
the event of an in-flight cancellation or failure, as the only indexes
handled at once in a single REINDEX CONCURRENTLY loop are the ones from
the partition being worked on.

Isolation tests are added to emulate some cases I bumped into while
developing this feature, particularly with the concurrent drop of a
leaf partition being reindexed.  However, this is rather limited, as LOCK
would cause REINDEX to block in the first transaction building the list
of partitions.

Per its multi-transaction nature, this new flavor cannot run in a
transaction block, similarly to REINDEX SCHEMA, SYSTEM and DATABASE.

Author: Justin Pryzby, Michael Paquier
Reviewed-by: Anastasia Lubennikova
Discussion: https://postgr.es/m/db12e897-73ff-467e-94cb-4af03705435f.adger.lj@alibaba-inc.com
2020-09-08 10:09:22 +09:00
Thomas Munro 861c6e7c8e Skip unnecessary stat() calls in walkdir().
Some kernels can tell us the type of a "dirent", so we can avoid a call
to stat() or lstat() in many cases.  Define a new function
get_dirent_type() to contain that logic, for use by the backend and
frontend versions of walkdir(), and perhaps other callers in future.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKG%2BFzxupGGN4GpUdbzZN%2Btn6FQPHo8w0Q%2BAPH5Wz8RG%2Bww%40mail.gmail.com
2020-09-07 18:28:06 +12:00
Tom Lane 67a472d71c Remove arbitrary restrictions on password length.
This patch started out with the goal of harmonizing various arbitrary
limits on password length, but after a while a better idea emerged:
let's just get rid of those fixed limits.

recv_password_packet() has an arbitrary limit on the packet size,
which we don't really need, so just drop it.  (Note that this doesn't
really affect anything for MD5 or SCRAM password verification, since
those will hash the user's password to something shorter anyway.
It does matter for auth methods that require a cleartext password.)

Likewise remove the arbitrary error condition in pg_saslprep().

The remaining limits are mostly in client-side code that prompts
for passwords.  To improve those, refactor simple_prompt() so that
it allocates its own result buffer that can be made as big as
necessary.  Actually, it proves best to make a separate routine
pg_get_line() that has essentially the semantics of fgets(), except
that it allocates a suitable result buffer and hence will never
return a truncated line.  (pg_get_line has a lot of potential
applications to replace randomly-sized fgets buffers elsewhere,
but I'll leave that for another patch.)

I built pg_get_line() atop stringinfo.c, which requires moving
that code to src/common/; but that seems fine since it was a poor
fit for src/port/ anyway.

This patch is mostly mine, but it owes a good deal to Nathan Bossart
who pressed for a solution to the password length problem and
created a predecessor patch.  Also thanks to Peter Eisentraut and
Stephen Frost for ideas and discussion.

Discussion: https://postgr.es/m/09512C4F-8CB9-4021-B455-EF4C4F0D55A0@amazon.com
2020-09-03 20:09:18 -04:00
Amit Kapila 464824323e Add support for streaming to built-in logical replication.
To add support for streaming of in-progress transactions into the
built-in logical replication, we need to do three things:

* Extend the logical replication protocol, to identify in-progress
transactions, and allow adding additional bits of information (e.g.
XID of subtransactions).

* Modify the output plugin (pgoutput) to implement the new stream
API callbacks, by leveraging the extended replication protocol.

* Modify the replication apply worker, to properly handle streamed
in-progress transactions by spilling the data to disk and then
replaying them on commit.

We must, however, explicitly disable streaming replication during
replication slot creation, even if the plugin supports it.  We
don't need to replicate the changes accumulated during this phase,
and moreover we don't have a replication connection open, so there is
nowhere to send the data anyway.
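As a rough usage sketch, streaming is opted into per subscription; note that
the streaming subscription option shown here is an assumption, as it is added
by a separate commit:

  -- Hypothetical subscription allowing in-progress transactions to be
  -- streamed to the apply worker before commit.
  CREATE SUBSCRIPTION mysub
    CONNECTION 'host=primary dbname=postgres'
    PUBLICATION mypub
    WITH (streaming = on);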

Author: Tomas Vondra, Dilip Kumar and Amit Kapila
Reviewed-by: Amit Kapila, Kuntal Ghosh and Ajin Cherian
Tested-by: Neha Sharma, Mahendra Singh Thalor and Ajin Cherian
Discussion: https://postgr.es/m/688b0b7f-2f6c-d827-c27b-216a8e3ea700@2ndquadrant.com
2020-09-03 07:54:07 +05:30
Andres Freund 73487a60fc snapshot scalability: Move subxact info to ProcGlobal, remove PGXACT.
Similar to the previous changes, this increases the chance that data
frequently needed by GetSnapshotData() stays in the L2 cache. In many
workloads subtransactions are very rare, and this makes the check for
that considerably cheaper.

As this removes the last member of PGXACT, there is no need to keep it
around anymore.

On a larger 2 socket machine this and the two preceding commits result
in a ~1.07x performance increase in read-only pgbench. For read-heavy
mixed r/w workloads without row level contention, I see about 1.1x.

Author: Andres Freund <andres@anarazel.de>
Reviewed-By: Robert Haas <robertmhaas@gmail.com>
Reviewed-By: Thomas Munro <thomas.munro@gmail.com>
Reviewed-By: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/20200301083601.ews6hz5dduc3w2se@alap3.anarazel.de
2020-08-14 15:33:35 -07:00
Andres Freund dc7420c2c9 snapshot scalability: Don't compute global horizons while building snapshots.
To make GetSnapshotData() more scalable, it cannot look at each proc's
xmin: While snapshot contents do not need to change whenever a read-only
transaction commits or a snapshot is released, a proc's xmin is modified in
those cases. The frequency of xmin modifications leads to, particularly on
higher core count systems, many cache misses inside GetSnapshotData(), despite
the data underlying a snapshot not changing. That is the most
significant source of GetSnapshotData() scaling poorly on larger systems.

Without accessing xmins, GetSnapshotData() cannot calculate accurate horizons /
thresholds as it has so far. But we don't really have to: The horizons don't
actually change that much between GetSnapshotData() calls. Nor are the horizons
actually used every time a snapshot is built.

The trick this commit introduces is to delay computation of accurate horizons
until their use, and to use horizon boundaries to determine whether accurate
horizons need to be computed.

The use of RecentGlobal[Data]Xmin to decide whether a row version could be
removed has been replaced with new GlobalVisTest* functions.  These use two
thresholds to determine whether a row can be pruned:
1) definitely_needed, indicating that rows deleted by XIDs >= definitely_needed
   are definitely still visible.
2) maybe_needed, indicating that rows deleted by XIDs < maybe_needed can
   definitely be removed
GetSnapshotData() updates definitely_needed to be the xmin of the computed
snapshot.

When testing whether a row can be removed (with GlobalVisTestIsRemovableXid())
and the tested XID falls in between the two (i.e. XID >= maybe_needed && XID <
definitely_needed) the boundaries can be recomputed to be more accurate. As it
is not cheap to compute accurate boundaries, we limit the number of times that
happens in short succession.  As the boundaries used by
GlobalVisTestIsRemovableXid() are never reset (with maybe_needed updated by
GetSnapshotData()), it is likely that further tests can benefit from an earlier
computation of accurate horizons.

To avoid regressing performance when old_snapshot_threshold is set (as that
requires an accurate horizon to be computed), heap_page_prune_opt() doesn't
unconditionally call TransactionIdLimitedForOldSnapshots() anymore. Both the
computation of the limited horizon, and the triggering of errors (with
SetOldSnapshotThresholdTimestamp()) are now only done when necessary to remove
tuples.

This commit just removes the accesses to PGXACT->xmin from
GetSnapshotData(), but other members of PGXACT residing in the same
cache line are accessed. Therefore this in itself does not result in a
significant improvement. Subsequent commits will take advantage of the
fact that GetSnapshotData() now does not need to access xmins anymore.

Note: This contains a workaround in heap_page_prune_opt() to keep the
snapshot_too_old tests working. While that workaround is ugly, the tests
currently are not meaningful, and it seems best to address them separately.

Author: Andres Freund <andres@anarazel.de>
Reviewed-By: Robert Haas <robertmhaas@gmail.com>
Reviewed-By: Thomas Munro <thomas.munro@gmail.com>
Reviewed-By: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/20200301083601.ews6hz5dduc3w2se@alap3.anarazel.de
2020-08-12 16:03:49 -07:00
Noah Misch e078fb5d4e Move connect.h from fe_utils to src/include/common.
Any libpq client can use the header.  Clients include backend components
postgres_fdw, dblink, and logical replication apply worker.  Back-patch
to v10, because another fix needs this.  In released branches, just copy
the header and keep the original.
2020-08-10 09:22:54 -07:00
Peter Eisentraut 25244b8972 Rename configure.in to configure.ac
The new name has been preferred by Autoconf for a long time.  Future
versions of Autoconf will warn about the old name.

Discussion: https://www.postgresql.org/message-id/flat/e796c185-5ece-8569-248f-dd3799701be1%402ndquadrant.com
2020-07-24 10:42:08 +02:00
Peter Eisentraut 4d3db13621 Define OPENSSL_API_COMPAT
This avoids deprecation warnings from newer OpenSSL versions (3.0.0 in
particular).

Discussion: https://www.postgresql.org/message-id/flat/FEF81714-D479-4512-839B-C769D2605F8A%40yesql.se
2020-07-19 12:14:42 +02:00
Michael Paquier 61be85afab Revert "Remove reset of testtablespace from pg_regress on Windows"
This reverts commit 2b2a070, which moved the reset of the path
"testtablespace" used by the regression tests as a location for
tablespaces (via --outputdir) from pg_regress to the MSVC script
vcregress.pl, as this broke the behavior added by ce5d342 to be able to
safely run the regression test suite with an administrative Windows
account using a restricted token.

Note that before 2b2a070, the code doing the reset in pg_regress.c
included a comment saying that we had better move that out to a
different place, leading to the mistake made in 2b2a070.  Fix this
comment, and document instead that we had better never remove this code,
so as not to break again the behavior we expect on Windows.

Thanks to Thomas Munro and Andrew Dunstan for the discussion.

Discussion: https://postgr.es/m/6d9eee97-54c8-e14a-48f7-3194e712f54f@2ndQuadrant.com
Discussion: https://postgr.es/m/CA+hUKGLiieEzfrdWxWFE+_wnXho_F5Smx972X1wEubhS7v1q9g@mail.gmail.com
2020-07-10 17:08:13 +09:00
Michael Paquier 404b912c5c Improve perl script in MSVC to build binaries
This commit includes two improvements for build.pl in src/tools/msvc/:
- Fix two warnings related to $ARGV[0] being uninitialized, something
that happens when calling build.pl (or build.bat) without arguments to
compile all the components with release quality.
- If the script is called with more than two arguments, exit immediately
and show new help output.  build.pl did not fail in this case
before this commit, and the extra arguments were just ignored, but the
new behavior helps in understanding how to run the script without
looking at the documentation for the Windows builds.

Reported-by: Ranier Vilela
Author: Michael Paquier
Reviewed-by: Juan José Santamaría Flecha, David Zhang
Discussion: https://postgr.es/m/CAEudQAo38dfR_0Ugt2OHy4mq-6Hz93XPSBAGEUV67RhKdgp8Zg@mail.gmail.com
2020-07-06 09:16:17 +09:00
Amit Kapila a69e041d0c Improve vacuum error context handling.
Use separate functions to save and restore error context information, as
that makes the code easier to understand.  Also, make it clear that the
index information required for error context is sane.

Author: Andres Freund, Justin Pryzby, Amit Kapila
Backpatch-through: 13, where it was introduced
Discussion: https://postgr.es/m/CAA4eK1LWo+v1OWu=Sky27GTGSCuOmr7iaURNbc5xz6jO+SaPeA@mail.gmail.com
2020-07-01 07:58:36 +05:30
Tom Lane ea57e531b9 Remove support for timezone "posixrules" file.
The IANA tzcode library has a feature to read a time zone file named
"posixrules" and apply the daylight-savings transition dates and times
therein, when it is given a POSIX-style time zone specification that
lacks an explicit transition rule.  However, there's a problem with
that code: it doesn't work for dates past the Y2038 time_t rollover.
(Effectively, all times beyond that point are treated as standard
time.)  The IANA crew regard this feature as legacy, so their plan is
to remove it, not fix it.  The time frame in which that will happen
is unclear, but presumably it'll happen well before 2038.

Moreover, effective with the next IANA data update (probably this
fall), the recommended default will be to not install a "posixrules"
file in the first place.  The time frame in which tzdata packagers
might adopt that suggestion is likewise unclear, but at least some
platforms will probably do it in the next year or so.  While we could
ignore that recommendation so far as PG-supplied tzdata trees are
concerned, builds using --with-system-tzdata will be subject to
whatever the platform's tzdata packager decides to do.

Thus, whether or not we do anything, some increasing fraction of
Postgres users will be exposed to the behavior observed when there
is no "posixrules" file; and if we do nothing, we'll have essentially
no control over the timing of that change.

The best thing to do to ameliorate the uncertainty seems to be to
proactively remove the posixrules-reading feature.  If we do that in
a scheduled release then at least we can release-note the behavioral
change, rather than having users be surprised by it after a routine
tzdata update.

The change in question is fairly minor anyway: to be affected,
you have to be using a POSIX-style timezone spec, it has to not
have an explicit rule, and it has to not be one of the four traditional
continental-USA zone names (EST5EDT, CST6CDT, MST7MDT, or PST8PDT),
as those are special-cased.  Since the default "posixrules" file
provides USA DST rules, the number of people who are likely to find
such a zone spec useful is probably quite small.  Moreover, the
fallback behavior with no explicit rule and no "posixrules" file is to
apply current USA rules, so the only thing that really breaks is the
DST transitions in years before 2007 (and you get the countervailing
fix that transitions after 2038 will be applied).
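To illustrate the affected case (the zone strings are illustrative, using
standard POSIX TZ syntax):

  -- Spec with an explicit transition rule: unaffected by this change.
  SET TimeZone = 'CET-1CEST,M3.5.0,M10.5.0/3';

  -- POSIX-style spec with no explicit rule (and not one of the four
  -- special-cased names): previously fell back to the "posixrules" file.
  SET TimeZone = 'CET-1CEST';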

Now, some installations might have replaced the "posixrules" file,
allowing e.g. EU rules to be applied to a POSIX-style timezone spec.
That won't work anymore.  But it's not exactly clear why this solution
would be preferable to using a regular named zone.  In any case, given
the Y2038 issue, we need to be pushing users to stop depending on this.

Back-patch into v13; it hasn't been released yet, so it seems OK to
change its behavior.  (Personally I think we ought to back-patch
further, but I've been outvoted.)

Discussion: https://postgr.es/m/1390.1562258309@sss.pgh.pa.us
Discussion: https://postgr.es/m/20200621211855.6211-1-eggert@cs.ucla.edu
2020-06-29 18:55:01 -04:00
Michael Paquier 2b2a070d98 Remove reset of testtablespace from pg_regress on Windows
testtablespace is an extra path used as tablespace location in the main
regression test suite, computed from --outputdir as defined by the
caller of pg_regress (current directory if undefined).

This special handling was introduced as of f10589e to be specific to
MSVC, as we let pg_regress' Makefile handle this cleanup in other
environments.  This moves the cleanup to the MSVC script running
regression tests instead where needed: check, installcheck and
upgradecheck.  I have also checked this patch on MSVC with repeated runs
of each target.

Author: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/20200219.142519.437573253063431435.horikyota.ntt@gmail.com
2020-06-18 10:40:10 +09:00
Michael Paquier e78900afd2 Create by default sql/ and expected/ for output directory in pg_regress
Using --outputdir with a custom output directory has never created by
default the sql/ and expected/ paths generated with contents from
respectively input/ and output/ if they don't exist, while the base
output directory gets created if it does not exist.  If sql/ and
expected/ are not present, pg_regress would fail because of the missing
paths, requiring test scripts to create those extra paths by themselves.
This commit changes pg_regress so that both get created by default if
they do not exist, removing the need for external test scripts to do so.

This cleans up two code paths in the tree for pg_upgrade tests in MSVC
and environments able to use test.sh.  sql/ and expected/ were created
as part of each test script, but this is not needed anymore as
pg_regress handles the work now.

Author: Roman Zharkov, Daniel Gustafsson
Reviewed-by: Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/16484-4d89e9cc11241996@postgresql.org
2020-06-13 14:04:56 +09:00
Michael Paquier a3b2bf1fe7 Move frontend-side archive APIs from src/common/ to src/fe_utils/
fe_archive.c was compiled only for the frontend in src/common/, but as
it will never share anything with the backend, it makes the most sense to
move this file to src/fe_utils/.

Reported-by: Peter Eisentraut
Discussion: https://postgr.es/m/e9766d71-8655-ac86-bdf6-77e0e7169977@2ndquadrant.com
Backpatch-through: 13
2020-06-11 15:48:46 +09:00
Andres Freund 47c718792b Avoid need for valgrind suppressions for pg_atomic_init_u64 on some platforms.
Previously we used pg_atomic_write_64_impl inside
pg_atomic_init_u64. That works correctly, but on platforms without
64bit single copy atomicity it could trigger spurious valgrind errors
about uninitialized memory, because we use compare_and_swap for atomic
writes on such platforms.

I previously suppressed one instance of this problem (6c878edc1d),
but as Tom reports that wasn't enough. As the atomic variable cannot
yet be accessed concurrently during initialization, it seems better
to have pg_atomic_init_64_impl set the value directly.

Change pg_atomic_init_u32_impl for symmetry.

Reported-By: Tom Lane
Author: Andres Freund
Discussion: https://postgr.es/m/1714601.1591503815@sss.pgh.pa.us
Backpatch: 9.5-
2020-06-08 19:52:19 -07:00
Noah Misch 5a2398b0b2 MSVC: Avoid warning when testing a TAP suite without PROVE_FLAGS.
Commit 7be5d8df1f surfaced the logic
error, which had no functional implications, by adding "use warnings".
The buildfarm always customizes PROVE_FLAGS, so the warning did not
appear there.  Back-patch to 9.5 (all supported versions).
2020-06-07 16:27:13 -07:00
Tom Lane d10b19e224 Stamp HEAD as 14devel.
Let the hacking begin ...
2020-06-07 17:16:30 -04:00
Tom Lane b5d69b7c22 pgindent run prior to branching v13.
pgperltidy and reformat-dat-files too, though those didn't
find anything to change.
2020-06-07 16:57:08 -04:00
Noah Misch 26056b3ba8 Refresh function name in CRC-associated Valgrind suppressions.
Back-patch to 9.5, where commit 4f700bcd20
first appeared.

Reviewed by Tom Lane.  Reported by Andrew Dunstan.

Discussion: https://postgr.es/m/4dfabec2-a3ad-0546-2d62-f816c97edd0c@2ndQuadrant.com
2020-06-05 20:10:53 -07:00
Michael Paquier d2a9959907 Fix MSVC installations with multiple "configure" files detected
When installing binaries and libraries using the MSVC installation
routines, the operation gets done after moving to the root folder, whose
location is detected by checking if "configure" exists two times in a
row.  So, calling the installation script from src/tools/msvc/ with an
extra "configure" file four levels up the root path of the code tree
causes the execution to go further up, leading to a failure in finding
the builds.  This commit fixes the issue by moving to the root folder of
the code tree only once, when necessary.

Author: Arnold Müller
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/16343-f638f67e7e52b86c@postgresql.org
Backpatch-through: 9.5
2020-05-21 14:41:15 +09:00
Tom Lane fa27dd40d5 Run pgindent with new pg_bsd_indent version 2.1.1.
Thomas Munro fixed a longstanding annoyance in pg_bsd_indent, that
it would misformat lines containing IsA() macros on the assumption
that the IsA() call should be treated like a cast.  This improves
some other cases involving field/variable names that match typedefs,
too.  The only places that get worse are a couple of uses of the
OpenSSL macro STACK_OF(); we'll gladly take that trade-off.

Discussion: https://postgr.es/m/20200114221814.GA19630@alvherre.pgsql
2020-05-16 11:54:51 -04:00
Tom Lane e02ad575d8 Final pgindent run with pg_bsd_indent version 2.1.
This is just to provide a clean basis for comparison of the results
of the new version.  I did fix a typo that crept into 242dfcbaf.

Discussion: https://postgr.es/m/20200114221814.GA19630@alvherre.pgsql
2020-05-16 11:49:14 -04:00
Tom Lane 5cbfce562f Initial pgindent and pgperltidy run for v13.
Includes some manual cleanup of places that pgindent messed up,
most of which weren't per project style anyway.

Notably, it seems some people didn't absorb the style rules of
commit c9d297751, because there were a bunch of new occurrences
of function calls with a newline just after the left paren, all
with faulty expectations about how the rest of the call would get
indented.
2020-05-14 13:06:50 -04:00
Tom Lane 96d175e3e2 Fix findoidjoins to recognize oidvector columns.
Somehow we'd never noticed this oversight, even though it means
that such basic columns as pg_proc.proargtypes were not being
validated by the oidjoins test.  Correct the query and update
the test script with the newly-found dependencies.
2020-05-09 16:28:20 -04:00
Tom Lane 6c298881c2 Update oidjoins regression test for v13.
We seem to have forgotten to do this in the v12 cycle, so add it as
a task in the RELEASE_CHANGES list, in hopes we won't forget again.

While here, fix findoidjoins.c so that it actually works in the
new dispensation where OID is a regular column, and change it to only
consider system relations (this avoids being fooled by the OID column
in the brintest test table).

Also tweak the largeobject test so that the somewhat-recently-added
manual creation of a LO with an OID in the system range doesn't
fool findoidjoins.c.  For the moment I just made that use an unused
OID, but we might have to find a more robust solution someday.
2020-05-09 13:05:08 -04:00
Michael Paquier beb2516e96 Handle spaces for Python install location in MSVC scripts
Attempting to use an installation path of Python that includes spaces
caused the MSVC builds to fail.  This fixes the issue by using the same
quoting method as ad7595b for OpenSSL.

Author: Victor Wagner
Discussion: https://postgr.es/m/20200430150608.6dc6b8c4@antares.wagner.home
Backpatch-through: 9.5
2020-05-06 21:08:15 +09:00
Tom Lane 6c5f916168 Update Windows timezone name list to include currently-known zones.
Thanks to Juan José Santamaría Flecha.

Discussion: https://postgr.es/m/5752.1587740484@sss.pgh.pa.us
2020-04-24 17:53:23 -04:00
Tom Lane bd8c5cee96 Improve placement of "display name" comment in win32_tzmap[] entries.
Sticking this comment at the end of the last line was a bad idea: it's
not particularly readable, and it tempts pgindent to mess with line
breaks within the comment, which in turn reveals that win32tzlist.pl's
clean_displayname() does the wrong thing to clean up such line breaks.
While that's not hard to fix, there's basically no excuse for this
arrangement to begin with, especially since it makes the table layout
needlessly vary across back branches with different pgindent rules.
Let's just put the comment inside the braces, instead.

This commit just moves and reformats the comments, and updates
win32tzlist.pl to match; there's no actual data change.

Per odd-looking results from Juan José Santamaría Flecha.
Back-patch, since the point is to make win32_tzmap[] look the
same in all supported branches again.

Discussion: https://postgr.es/m/5752.1587740484@sss.pgh.pa.us
2020-04-24 17:21:44 -04:00
Bruce Momjian 395a9a1248 git_changelog: use modern format for rel branch names in example
e.g., REL_12_STABLE
2020-04-24 15:16:07 -04:00
Michael Paquier 4e87c4836a Fix handling of WAL segments ready to be archived during crash recovery
78ea8b5 has fixed an issue related to the recycling of WAL segments on
standbys depending on archive_mode.  However, it has introduced a
regression with the handling of WAL segments ready to be archived during
crash recovery, causing those files to be recycled without getting
archived.

This commit fixes the regression by tracking in shared memory whether a live
cluster is in crash recovery or archive recovery, as the handling
of WAL segments ready to be archived is different in the two cases (those
WAL segments should not be removed during crash recovery), and by using
this new shared memory state to decide if a segment can be recycled or
not.  Previously, it was not possible to know if a cluster was in crash
recovery or archive recovery, as the shared state was only able to track
whether recovery was happening or not, leading to the problem.

A set of TAP tests is added to close the gap here, making sure that WAL
segments ready to be archived are correctly handled when a cluster is in
archive or crash recovery with archive_mode set to "on" or "always", for
both standby and primary.

Reported-by: Benoît Lobréau
Author: Jehan-Guillaume de Rorthais
Reviewed-by: Kyotaro Horiguchi, Fujii Masao, Michael Paquier
Discussion: https://postgr.es/m/20200331172229.40ee00dc@firost
Backpatch-through: 9.5
2020-04-24 08:48:28 +09:00
Andrew Dunstan a9659fb654 Use a slightly more liberal regex to detect Visual Studio version
Apparently in some language versions of Visual Studio nmake outputs some
material after the version number and before the end of the line. This
has been seen in Chinese versions. Therefore, we no longer demand that
the version string comes at the end of a line.

Per complaint from Cuiping Lin.

Backpatch to all live branches.
2020-04-17 14:44:33 -04:00
Andrew Dunstan 0516f94d18 Stop requiring an explicit return from perl subroutines
The consensus of the project appears to be that this provides little
benefit and is simply an annoyance.

Discussion: https://postgr.es/m/27481.1586618092@sss.pgh.pa.us
2020-04-14 16:55:34 -04:00
Andrew Dunstan e60c6f6ea1 Set Perl search path more idiomatically
Back in commits 1df92eeafe, f884a96819, and 592123efbb I used some
hackish code to set the script search path, unaware despite decades of
perl that there was a completely standard way to do this. This patch
changes those cases to use the standard perl FindBin package.
2020-04-14 16:47:07 -04:00
Andrew Dunstan 8f00d84afc Use perl's $/ more idiomatically
This replaces a few occurrences of ugly code with cleaner and more
idiomatic usage. The problem was highlighted by perlcritic, but we're
not enforcing the policy that led to the discovery.

Discussion: https://postgr.es/m/20200412074245.GB623763@rfd.leadboat.com
2020-04-13 12:06:11 -04:00
Andrew Dunstan 7be5d8df1f Use perl warnings pragma consistently
We've had a mixture of the warnings pragma, the -w switch on the shebang
line, and no warnings at all. This patch removes the -w switch and adds
the warnings pragma to all perl sources missing it. It raises the
severity of the TestingAndDebugging::RequireUseWarnings perlcritic
policy to level 5, so that we catch any future violations.

Discussion: https://postgr.es/m/20200412074245.GB623763@rfd.leadboat.com
2020-04-13 11:55:45 -04:00
Andrew Dunstan 8930e43ecd Print policy name in perlcritic messages
This makes it easier to do a web search for details of the policy that's
been violated, as well as displaying the name that might be needed for a
policy override.

Various perlcritic settings changes are being discussed, but this one
should be uncontroversial.
2020-04-13 11:46:18 -04:00
Alexander Korotkov 1aac32df89 Revert 0f5ca02f53
0f5ca02f53 introduces 3 new keywords.  That appears to be too much for a
relatively small feature.  Given that we are now past feature freeze, it's
already too late for discussion of the new syntax.  So, revert.

Discussion: https://postgr.es/m/28209.1586294824%40sss.pgh.pa.us
2020-04-08 11:37:27 +03:00
Alexander Korotkov 0f5ca02f53 Implement waiting for given lsn at transaction start
This commit adds the following optional clause to the BEGIN and START
TRANSACTION commands.

  WAIT FOR LSN lsn [ TIMEOUT timeout ]

The new clause postpones transaction start until the given lsn is applied on
the standby.  This allows the user to be sure that changes previously made on
the primary will be visible on the standby.
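A usage sketch of the new clause on a standby (the lsn and timeout values
are hypothetical, and the syntax was later reverted by 1aac32df89 above):

  -- Wait until the standby has applied this lsn before starting.
  BEGIN WAIT FOR LSN '0/3F07A6B1' TIMEOUT 10000;
  SELECT count(*) FROM recently_changed_table;
  COMMIT;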

A new shared memory struct is used to track the awaited lsn per backend.  The
recovery process wakes up a backend once the required lsn has been applied.

Author: Ivan Kartyshov, Anna Akenteva
Reviewed-by: Craig Ringer, Thomas Munro, Robert Haas, Kyotaro Horiguchi
Reviewed-by: Masahiko Sawada, Ants Aasma, Dmitry Ivanov, Simon Riggs
Reviewed-by: Amit Kapila, Alexander Korotkov
Discussion: https://postgr.es/m/0240c26c-9f84-30ea-fca9-93ab2df5f305%40postgrespro.ru
2020-04-07 23:51:10 +03:00
Thomas Munro 4c04be9b05 Introduce xid8-based functions to replace txid_XXX.
The txid_XXX family of fmgr functions exposes 64 bit transaction IDs to
users as int8.  Now that we have an SQL type xid8 for FullTransactionId,
define a new set of functions including pg_current_xact_id() and
pg_current_snapshot() based on that.  Keep the old functions around too,
for now.
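For example, the new functions can be called in place of their txid_XXX
counterparts:

  -- xid8-based replacements for txid_current() and txid_current_snapshot().
  SELECT pg_current_xact_id();
  SELECT pg_current_snapshot();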

It's a bit sneaky to use the same C functions for both, but since the
binary representation is identical except for the signedness of the
type, and since older functions are the ones using the wrong signedness,
and since we'll presumably drop the older ones after a reasonable period
of time, it seems reasonable to switch to FullTransactionId internally
and share the code for both.

Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Takao Fujii <btfujiitkp@oss.nttdata.com>
Reviewed-by: Yoshikazu Imai <imai.yoshikazu@fujitsu.com>
Reviewed-by: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://postgr.es/m/20190725000636.666m5mad25wfbrri%40alap3.anarazel.de
2020-04-07 12:04:32 +12:00
Amit Kapila df3b181499 Add infrastructure to track WAL usage.
This allows gathering the WAL generation statistics for each statement
execution.  The three statistics that we collect are the number of WAL
records, the number of full page writes and the amount of WAL bytes
generated.

This helps the users who have write-intensive workload to see the impact
of I/O due to WAL.  This further enables us to see approximately what
percentage of overall WAL is due to full page writes.

In the future, we can extend this functionality to allow us to compute the
exact amount of WAL data due to full page writes.

This patch in itself is just an infrastructure to compute WAL usage data.
The upcoming patches will expose this data via explain, auto_explain,
pg_stat_statements and verbose (auto)vacuum output.
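As a sketch of how the data is surfaced by those upcoming patches (the
EXPLAIN option shown is one of the later additions, assumed here, and the
table is hypothetical):

  -- Report WAL records, full page writes and WAL bytes for a statement.
  EXPLAIN (ANALYZE, WAL) INSERT INTO t SELECT generate_series(1, 1000);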

Author: Kirill Bychik, Julien Rouhaud
Reviewed-by: Dilip Kumar, Fujii Masao and Amit Kapila
Discussion: https://postgr.es/m/CAB-hujrP8ZfUkvL5OYETipQwA=e3n7oqHFU=4ZLxWS_Cza3kQQ@mail.gmail.com
2020-04-04 10:02:08 +05:30
Robert Haas c12e43a2e0 Add checksum helper functions.
These functions make it easier to write code that wants to compute a
checksum for some data while allowing the user to configure the type
of checksum that gets used.

This is another piece of infrastructure for the upcoming patch to add
backup manifests.

Patch written from scratch by me, but it is similar to previous work
by Rushabh Lathia and Suraj Kharage. Suraj also reviewed this version
off-list. Advice on how not to break Windows from Davinder Singh.

Discussion: http://postgr.es/m/CA+TgmoZV8dw1H2bzZ9xkKwdrk8+XYa+DC9H=F7heO2zna5T6qg@mail.gmail.com
Discussion: http://postgr.es/m/CA+TgmoZRTBiPyvQEwV79PU1ePTtSEo2UeVncrkJMbn1sU1gnRA@mail.gmail.com
2020-04-03 11:52:43 -04:00
Fujii Masao 17e0328224 Allow pg_stat_statements to track planning statistics.
This commit makes pg_stat_statements support a new GUC,
pg_stat_statements.track_planning. If this option is enabled,
pg_stat_statements tracks the planning statistics of the statements,
e.g., the number of times the statement was planned, the total time
spent planning the statement, etc. This feature is useful for finding
statements that take a long time to plan. Previously, since
pg_stat_statements tracked only execution statistics, we could not use
it for that purpose.
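A rough usage sketch (the view column names shown are assumptions based on
the 1.8 definition described here):

  -- In postgresql.conf:
  --   shared_preload_libraries = 'pg_stat_statements'
  --   pg_stat_statements.track_planning = on

  SELECT query, plans, total_plan_time, calls, total_exec_time
    FROM pg_stat_statements
   ORDER BY total_plan_time DESC
   LIMIT 5;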

The planning and execution statistics are stored separately at the end of
each phase, so there is not always a one-to-one relationship
between them. For example, if a statement is successfully planned
but fails in the execution phase, only its planning statistics are stored.
This may cause users to see pg_stat_statements results that differ
from those of the previous version. To avoid this,
pg_stat_statements.track_planning needs to be disabled.

This commit bumps the version of pg_stat_statements to 1.8
since it changes the definition of pg_stat_statements function.

Author: Julien Rouhaud, Pascal Legrand, Thomas Munro, Fujii Masao
Reviewed-by: Sergei Kornilov, Tomas Vondra, Yoshikazu Imai, Haribabu Kommi, Tom Lane
Discussion: https://postgr.es/m/CAHGQGwFx_=DO-Gu-MfPW3VQ4qC7TfVdH2zHmvZfrGv6fQ3D-Tw@mail.gmail.com
Discussion: https://postgr.es/m/CAEepm=0e59Y_6Q_YXYCTHZkqOc6H2pJ54C_Xe=VFu50Aqqp_sA@mail.gmail.com
Discussion: https://postgr.es/m/DB6PR0301MB21352F6210E3B11934B0DCC790B00@DB6PR0301MB2135.eurprd03.prod.outlook.com
2020-04-02 11:20:19 +09:00
Michael Paquier a7e8ece41c Add -c/--restore-target-wal to pg_rewind
pg_rewind needs to copy from the source cluster to the target cluster a
set of relation blocks changed from the previous checkpoint where WAL
forked, up to the end of WAL on the target.  Building this list of
relation blocks requires a range of WAL segments that may not be present
anymore in the target's pg_wal, causing pg_rewind to fail.  It is
possible to work around this issue by manually copying the needed WAL
segments, but this may lead to some extra and ultimately useless work.

This commit introduces a new option allowing pg_rewind to use a
restore_command while doing the rewind, by grabbing the parameter value
of restore_command from the target cluster configuration.  This makes
the rewind operation more reliable, as only the WAL segments needed by
the rewind are restored from the archives.

In order to be able to do that, a new routine is added to src/common/ to
allow frontend tools to restore files from archives using an
already-built restore command.  This version is simpler than the
backend equivalent as there is no need to handle the non-recovery case.

Author: Alexey Kondratov
Reviewed-by: Andrey Borodin, Andres Freund, Alvaro Herrera, Alexander
Korotkov, Michael Paquier
Discussion: https://postgr.es/m/a3acff50-5a0d-9a2c-b3b2-ee36168955c1@postgrespro.ru
2020-04-01 10:57:03 +09:00
Alexander Korotkov 911e702077 Implement operator class parameters
PostgreSQL provides a set of template index access methods, where opclasses
have much freedom in the semantics of indexing.  These index AMs are GiST,
GIN, SP-GiST and BRIN.  There, opclasses define the representation of keys,
operations on them and the supported search strategies.  So, it's natural that
opclasses may face some tradeoffs that require a user-side decision.  This
commit implements opclass parameters, allowing users to set some values that
tell the opclass how to index the particular dataset.

This commit doesn't introduce new storage in the system catalogs.  Instead it
uses pg_attribute.attoptions, which is used for table column storage options
but unused for index attributes.

In order to avoid changing the signature of each opclass support function, we
implement a unified way to pass options to opclass support functions.  Options
are passed in fn_expr as a constant bytea expression.  This is possible because
opclass support functions are executed outside of expressions, so fn_expr is
otherwise unused for them.

This commit comes with some examples of opclass options usage.  We parametrize
the signature length in GiST.  That applies to multiple opclasses:
tsvector_ops, gist__intbig_ops, gist_ltree_ops, gist__ltree_ops, gist_trgm_ops
and gist_hstore_ops.  Also, we parametrize the maximum number of integer ranges
for gist__int_ops.  However, the main future usage of this feature is expected
to be json, where users would be able to specify how to index particular
json parts.
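For illustration, one of the parametrized opclasses above could be used like
this (the parameter name siglen is assumed; table and column are
hypothetical):

  CREATE EXTENSION pg_trgm;
  -- GiST trigram index with a non-default signature length.
  CREATE INDEX ON documents USING gist (body gist_trgm_ops (siglen = 32));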

Catversion is bumped.

Discussion: https://postgr.es/m/d22c3a18-31c7-1879-fc11-4c1ce2f5e5af%40postgrespro.ru
Author: Nikita Glukhov, revised by me
Reviewed-by: Nikolay Shaplov, Robert Haas, Tom Lane, Tomas Vondra, Alvaro Herrera
2020-03-30 19:17:23 +03:00
Amit Kapila b61d161c14 Introduce vacuum errcontext to display additional information.
The additional information displayed will be the block number for errors
occurring while processing the heap, and the index name for errors occurring
while processing an index.

This will help us in diagnosing problems that occur during a vacuum.
For example, if we get some error while vacuuming due to corruption (caused
either by bad hardware or by some bug), it can help us identify the block
in the heap and/or additional index information.

It sets up an error context callback to display additional information
with the error.  During different phases of vacuum (heap scan, heap
vacuum, index vacuum, index clean up, heap truncate), we update the error
context callback to display appropriate information.  We can extend it to
a bit more granular level like adding the phases for FSM operations or for
prefetching the blocks while truncating. However, I felt that it requires
adding many more error callback function calls and can make the code a bit
complex, so left those for now.

Author: Justin Pryzby, with few changes by Amit Kapila
Reviewed-by: Alvaro Herrera, Amit Kapila, Andres Freund, Michael Paquier
and Sawada Masahiko
Discussion: https://www.postgresql.org/message-id/20191120210600.GC30362@telsasoft.com
2020-03-30 07:33:38 +05:30
Peter Eisentraut 8f3ec75de4 Enable Unix-domain sockets support on Windows
As of Windows 10 version 1803, Unix-domain sockets are supported on
Windows.  But it's not automatically detected by configure because it
looks for struct sockaddr_un and Windows doesn't define that.  So we
just make our own definition on Windows and override the configure
result.

Set DEFAULT_PGSOCKET_DIR to empty on Windows so by default no
Unix-domain socket is used, because there is no good standard
location.

In pg_upgrade, we have to do some extra tweaking to preserve the
existing behavior of not using Unix-domain sockets on Windows.  Adding
support would be desirable, but it needs further work, in particular a
way to select whether to use Unix-domain sockets from the command-line
or with a run-time test.

The pg_upgrade test script needs a fix.  The previous code passed
"localhost" to postgres -k, which only happened to work because
Windows used to ignore the -k argument value altogether.  We instead
need to pass an empty string to get the desired effect.

The test suites will continue to not use Unix-domain sockets on
Windows.  This requires a small tweak in pg_regress.c.  The TAP tests
don't need to be changed because they decide by the operating system
rather than HAVE_UNIX_SOCKETS.

Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/54bde68c-d134-4eb8-5bd3-8af33b72a010@2ndquadrant.com
2020-03-28 15:01:01 +01:00
Andrew Dunstan 896fcdb230 Provide a TLS init hook
The default hook function sets the default password callback function.
In order to allow preloaded libraries to have an opportunity to override
the default, TLS initialization is now delayed slightly until after
shared preloaded libraries have been loaded.

A test module is provided which contains a trivial example that decodes
an obfuscated password for an SSL certificate.

Author: Andrew Dunstan
Reviewed By: Andreas Karlsson, Asaba Takanori
Discussion: https://postgr.es/m/04116472-818b-5859-1d74-3d995aab2252@2ndQuadrant.com
2020-03-25 17:13:17 -04:00
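As a hedged sketch of how a preloaded library might use such a hook (the hook
variable name and signature are given as best recalled, the header containing
its declaration may differ, and deobfuscate_passphrase() is a hypothetical
helper; this is not the test module shipped with the commit):

    #include "postgres.h"
    #include "libpq/libpq-be.h"         /* assumed location of the hook declaration */
    #include <openssl/ssl.h>

    /* Hypothetical helper that recovers the real passphrase */
    extern const char *deobfuscate_passphrase(void);

    static int
    my_passphrase_cb(char *buf, int size, int rwflag, void *userdata)
    {
        const char *pass = deobfuscate_passphrase();
        int         len = (int) strlen(pass);

        if (len >= size)
            return -1;                  /* buffer too small: fail */
        memcpy(buf, pass, len);
        return len;                     /* OpenSSL expects the passphrase length */
    }

    static void
    my_tls_init(SSL_CTX *context, bool isServerStart)
    {
        /* Override the passphrase callback installed by the default hook */
        SSL_CTX_set_default_passwd_cb(context, my_passphrase_cb);
    }

    void
    _PG_init(void)
    {
        openssl_tls_init_hook = my_tls_init;
    }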
Peter Eisentraut f5817595a7 Define EXEC_BACKEND in pg_config_manual.h
It was, for unclear reasons, defined in a separate location, which made
it more cumbersome to override for testing, and it also did not have
any prominent documentation.  Move to pg_config_manual.h, where
similar things are already collected.

The previous definition on the command-line had the effect of defining
it to the value 1, but now that we don't need that anymore we just
define it to empty, to simplify manual editing a bit.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/b7053ba8-b008-5335-31de-2fe4fe41ef0f%402ndquadrant.com
2020-03-25 14:31:14 +01:00
Michael Paquier e09ad07b21 Move routine building restore_command to src/common/
restore_command has only been used until now by the backend, but there
is a pending patch for pg_rewind to make use of that in the frontend.

Author: Alexey Kondratov
Reviewed-by: Andrey Borodin, Andres Freund, Alvaro Herrera, Alexander
Korotkov, Michael Paquier
Discussion: https://postgr.es/m/a3acff50-5a0d-9a2c-b3b2-ee36168955c1@postgrespro.ru
2020-03-24 12:13:36 +09:00
Alexander Korotkov 0df94beb36 Fix ordering in typedefs.list 2020-03-24 00:59:17 +03:00
Tom Lane d0587f52b3 Fix up recent breakage of headerscheck and cpluspluscheck.
headerscheck and cpluspluscheck should skip the recently-added
cmdtaglist.h header, since (like kwlist.h and some other similarly-
designed headers) it's not meant to be included standalone.

evtcache.h was missing an #include to support its usage of Bitmapset.

typecmds.h was missing an #include to support its usage of ParseState.

The first two of these were evidently oversights in commit 2f9661311.
I didn't track down exactly which change broke typecmds.h, but it
must have been some rearrangement in one of its existing inclusions,
because it has referenced ParseState for quite a long time and there
were not complaints from these checking programs before.
2020-03-21 18:28:44 -04:00
Andrew Dunstan 71c2fd0c04 Turn off deprecated bison warnings under MSVC
These are disabled by the configure code, so this is just fixing an
inconsistency in the MSVC code.

Backpatch to all live branches.
2020-03-20 13:55:15 -04:00
Alexander Korotkov 45452825e5 Add new typedefs introduced in 773df883e8 to typedefs.list 2020-03-19 21:40:45 +03:00
Peter Eisentraut 0a42a2e9ce Remove win32ver.rc from version_stamp.pl
This removes another relic from the old nmake-based Windows build.
version_stamp.pl put version number information into win32ver.rc.  But
win32ver.rc already gets other version number information from the
preprocessor at build time, so it would make more sense if all version
number information would be handled in the same way and we don't have
two places that do it.

What we need for this is having the major version number and the minor
version number as separate integer symbols.  Both configure and
Solution.pm already have that logic, because they compute
PG_VERSION_NUM.  So we just keep all the logic there now.  Put the
minor version number into a new symbol PG_MINORVERSION_NUM.  Also, add
a symbol PG_MAJORVERSION_NUM, which is a number, alongside the
existing PG_MAJORVERSION, which is a string.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/1ee46ac4-a9b2-4531-bf54-5ec2e374634d@2ndquadrant.com
2020-03-10 11:21:41 +01:00
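For example, extension code that previously parsed the PG_MAJORVERSION string
can now branch on the numeric symbols directly (a trivial sketch, assuming
pg_config.h is in scope):

    /* Prefer numeric comparison over parsing PG_MAJORVERSION ("13") as text */
    #if defined(PG_MAJORVERSION_NUM) && PG_MAJORVERSION_NUM >= 13
        /* code paths that rely on v13-or-later facilities */
    #else
        /* fallback for older major versions */
    #endif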
Tom Lane 36058a3c55 Create contrib/bool_plperl to provide a bool transform for PL/Perl[U].
plperl's default handling of bool arguments or results is not terribly
satisfactory, since Perl doesn't consider the string 'f' to be false.
Ideally we'd just fix that, but the backwards-compatibility hazard
would be substantial.  Instead, build a TRANSFORM module that can
be optionally applied to provide saner semantics.

Perhaps usefully, this is also about the minimum possible skeletal
example of a plperl transform module; so it might be a better starting
point for user-written transform modules than hstore_plperl or
jsonb_plperl.

Ivan Panchenko

Discussion: https://postgr.es/m/1583013317.881182688@f390.i.mail.ru
2020-03-06 17:11:23 -05:00
Peter Eisentraut 0ad6f848ee Move pg_upgrade's Windows link() implementation to AC_REPLACE_FUNCS
This way we can make use of it in other components as well, and it
fits better with the rest of the build system.

Discussion: https://www.postgresql.org/message-id/flat/72fff73f-dc9c-4ef4-83e8-d2e60c98df48%402ndquadrant.com
2020-03-04 08:22:54 +01:00
Tom Lane 7b425a5283 Blacklist port/win32_msvc/utime.h in cpluspluscheck and headerscheck.
Since commit 481c8e923 it tends to produce "error: sys/utime.h: No such
file or directory" on non-Windows platforms.
2020-03-02 14:35:22 -05:00
Peter Eisentraut 1933ae629e Add PostgreSQL home page to --help output
Per emerging standard in GNU programs and elsewhere.  Autoconf already
has support for specifying a home page, so we can just use that.

Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/flat/8d389c5f-7fb5-8e48-9a4a-68cec44786fa%402ndquadrant.com
2020-02-28 13:12:21 +01:00
Robert Haas 05d8449e73 Move src/backend/utils/hash/hashfn.c to src/common
This also involves renaming src/include/utils/hashutils.h, which
becomes src/include/common/hashfn.h. Perhaps an argument can be
made for keeping the hashutils.h name, but it seemed more
consistent to make it match the name of the file, and also more
descriptive of what is actually going on here.

Patch by me, reviewed by Suraj Kharage and Mark Dilger. Off-list
advice on how not to break the Windows build from Davinder Singh
and Amit Kapila.

Discussion: http://postgr.es/m/CA+TgmoaRiG4TXND8QuM6JXFRkM_1wL2ZNhzaUKsuec9-4yrkgw@mail.gmail.com
2020-02-27 09:25:41 +05:30
Michael Paquier 59f9cd9dd5 Fix build failure on header generation with repetitive builds of MSVC
GenerateConfigHeader() in Solution.pm was complaining about unused
define symbols even if a newer config header was not generated, causing
successive build attempts with MSVC to fail.

Oversight in commit 8f4fb4c.

Author: Kyotaro Horiguchi
Reviewed-by: Juan José Santamaría Flecha
Discussion: https://postgr.es/m/20200218.160500.44393633318853097.horikyota.ntt@gmail.com
2020-02-25 13:57:40 +09:00
Tom Lane ec4a7851d5 Adjust Solution.pm to set HAVE_STDINT_H.
We're not testing that anywhere anymore, but for consistency,
it should get defined.
2020-02-21 16:14:09 -05:00
Peter Eisentraut b24e125696 Fix perlcritic warnings 2020-02-21 22:03:05 +01:00
Peter Eisentraut 73c8596488 Allow running src/tools/msvc/mkvcbuild.pl under not Windows
This is to allow verifying the MSVC build file generation without having
to have Windows.

To do this, we avoid Windows-specific Perl modules and don't run the
"cl" compiler or "nmake".  The resulting build files won't actually be
completely correct, but it's useful enough.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/d73b2c7b-f081-8357-8422-7564d55f1aac%402ndquadrant.com
2020-02-21 20:57:43 +01:00
Tom Lane f4d59369d2 Assume that we have signed integral types and flexible array members.
These compiler features are required by C99, so remove the configure
probes for them.

This is part of a series of commits to get rid of no-longer-relevant
configure checks and dead src/port/ code.  I'm committing them separately
to make it easier to back out individual changes if they prove less
portable than I expect.

Discussion: https://postgr.es/m/15379.1582221614@sss.pgh.pa.us
2020-02-21 14:30:48 -05:00
Tom Lane 97cf1fa4ed Assume that we have <wchar.h>.
Windows has this, and so do all other live platforms according to the
buildfarm; it's been required by POSIX since SUSv2.  So remove the
configure probe and tests of HAVE_WCHAR_H.

This is part of a series of commits to get rid of no-longer-relevant
configure checks and dead src/port/ code.  I'm committing them separately
to make it easier to back out individual changes if they prove less
portable than I expect.

Discussion: https://postgr.es/m/15379.1582221614@sss.pgh.pa.us
2020-02-21 14:30:47 -05:00
Tom Lane 481c8e9232 Assume that we have utime() and <utime.h>.
These are required by POSIX since SUSv2, and no live platforms fail
to provide them.  On Windows, utime() exists and we bring our own
<utime.h>, so we're good there too.  So remove the configure probes
and ad-hoc substitute code.  We don't need to check for utimes()
anymore either, since that was only used as a substitute.

In passing, make the Windows build include <sys/utime.h> only where
we need it, not everywhere.

This is part of a series of commits to get rid of no-longer-relevant
configure checks and dead src/port/ code.  I'm committing them separately
to make it easier to back out individual changes if they prove less
portable than I expect.

Discussion: https://postgr.es/m/15379.1582221614@sss.pgh.pa.us
2020-02-21 14:30:47 -05:00
Tom Lane f88a058200 Assume that we have rint().
Windows has this since _MSC_VER >= 1200, and so do all other live
platforms according to the buildfarm, so remove the configure probe
and src/port/ substitution.

This is part of a series of commits to get rid of no-longer-relevant
configure checks and dead src/port/ code.  I'm committing them separately
to make it easier to back out individual changes if they prove less
portable than I expect.

Discussion: https://postgr.es/m/15379.1582221614@sss.pgh.pa.us
2020-02-21 14:30:47 -05:00
Tom Lane 1200d71a09 Assume that we have memmove().
Windows has this, and so do all other live platforms according to the
buildfarm, so remove the configure probe and c.h's substitute code.

This is part of a series of commits to get rid of no-longer-relevant
configure checks and dead src/port/ code.  I'm committing them separately
to make it easier to back out individual changes if they prove less
portable than I expect.

Discussion: https://postgr.es/m/15379.1582221614@sss.pgh.pa.us
2020-02-21 14:30:47 -05:00
Tom Lane abe41f453a Assume that we have cbrt().
Windows has this, and so do all other live platforms according to the
buildfarm, so remove the configure probe and float.c's substitute code.

This is part of a series of commits to get rid of no-longer-relevant
configure checks and dead src/port/ code.  I'm committing them separately
to make it easier to back out individual changes if they prove less
portable than I expect.

Discussion: https://postgr.es/m/15379.1582221614@sss.pgh.pa.us
2020-02-21 14:30:47 -05:00
Tom Lane 7fde892bc1 Assume that we have isinf().
Windows has this, and so do all other live platforms according to the
buildfarm, so remove the configure probe and src/port/ substitution.

This also lets us get rid of some configure probes that existed only
to support src/port/isinf.c.  I kept the port.h hack to force using
__builtin_isinf() on clang, though.

This is part of a series of commits to get rid of no-longer-relevant
configure checks and dead src/port/ code.  I'm committing them separately
to make it easier to back out individual changes if they prove less
portable than I expect.

Discussion: https://postgr.es/m/15379.1582221614@sss.pgh.pa.us
2020-02-21 14:30:47 -05:00
Tom Lane 799d22461a Assume that we have functional, 64-bit fseeko()/ftello().
Windows has this, and so do all other live platforms according to the
buildfarm, so remove the configure probe and src/port/ substitution.

Keep the probe that detects whether _LARGEFILE_SOURCE has to be
defined to get that, though ... that seems to be still relevant in
some places.

This is part of a series of commits to get rid of no-longer-relevant
configure checks and dead src/port/ code.  I'm committing them separately
to make it easier to back out individual changes if they prove less
portable than I expect.

Discussion: https://postgr.es/m/15379.1582221614@sss.pgh.pa.us
2020-02-21 14:30:47 -05:00
Peter Eisentraut 957338418b Require stdint.h
stdint.h belongs to the compiler (as opposed to inttypes.h), so by
requiring a C99 compiler we can also require stdint.h
unconditionally.  Remove configure checks and other workarounds for
it.

This also removes a few steps in the required portability adjustments
to the imported time zone code, which can be applied on the next
import.

When using GCC on a platform that is otherwise pre-C99, this will now
require at least GCC 4.5, which is the first release that supplied a
standard-conforming stdint.h if the native platform didn't have it.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/5d398bbb-262a-5fed-d839-d0e5cff3c0d7%402ndquadrant.com
2020-02-21 09:20:32 +01:00
Peter Eisentraut b691c189c6 Simplify passing of configure arguments to pg_config
The previous system had configure put the value into the makefiles and
then have the makefiles pass them to the build of pg_config.  That was
put in place when pg_config was a shell script.  We can simplify that
by having configure put the value into pg_config.h directly.  This
also makes the standard build system match how the MSVC build system
already does it.

Discussion: https://www.postgresql.org/message-id/flat/6e457870-cef5-5f1d-b57c-fc89cfb8a788%402ndquadrant.com
2020-02-10 19:23:41 +01:00
Thomas Munro 815c2f0972 Add kqueue(2) support to the WaitEventSet API.
Use kevent(2) to wait for events on the BSD family of operating
systems and macOS.  This is similar to the epoll(2) support added
for Linux by commit 98a64d0bd.

Author: Thomas Munro
Reviewed-by: Andres Freund, Marko Tiikkaja, Tom Lane
Tested-by: Mateusz Guzik, Matteo Beccati, Keith Fiske, Heikki Linnakangas, Michael Paquier, Peter Eisentraut, Rui DeSousa, Tom Lane, Mark Wong
Discussion: https://postgr.es/m/CAEepm%3D37oF84-iXDTQ9MrGjENwVGds%2B5zTr38ca73kWR7ez_tA%40mail.gmail.com
2020-02-05 17:35:57 +13:00
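Outside the WaitEventSet machinery itself, the underlying BSD primitive works
roughly like this self-contained sketch (BSD/macOS only; it simply waits for
one socket to become readable):

    #include <sys/types.h>
    #include <sys/event.h>
    #include <sys/time.h>
    #include <unistd.h>

    /* Wait up to timeout_ms for sock to become readable.
     * Returns 1 if readable, 0 on timeout, -1 on error. */
    static int
    wait_readable(int sock, int timeout_ms)
    {
        int             kq;
        int             rc;
        struct kevent   change;
        struct kevent   event;
        struct timespec timeout;

        kq = kqueue();
        if (kq < 0)
            return -1;

        /* Register one-shot interest in read-readiness on the socket */
        EV_SET(&change, sock, EVFILT_READ, EV_ADD | EV_ONESHOT, 0, 0, NULL);

        timeout.tv_sec = timeout_ms / 1000;
        timeout.tv_nsec = (long) (timeout_ms % 1000) * 1000000L;

        /* A single kevent() call both applies the change and waits for events */
        rc = kevent(kq, &change, 1, &event, 1, &timeout);

        close(kq);
        return rc;
    }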
Tom Lane 26a81bb815 Fix vcregress.pl for new plperl test case.
As of commit 50fc694e4, the plperl tests don't want preinstalled
plperl languages; they now install those themselves.  I'd removed the
--load-extension options from the GNUmakefile, but missed teaching
the MSVC build script about that.

Per buildfarm.
2020-01-29 19:13:34 -05:00
Robert Haas 006b9dcad4 Add jsonapi.c to Mkvcbuild.pm's @pgcommonallfiles.
My recent commit beb4699091 caused
some buildfarm breakage, as reported by Tom Lane. Try to repair.

This fix is extracted from a larger patch by Andrew Dunstan.

Discussion: http://postgr.es/m/8440ddc9-8347-ca64-1405-845d10e054cd@2ndQuadrant.com
Discussion: http://postgr.es/m/14178.1580312751@sss.pgh.pa.us
2020-01-29 10:50:17 -05:00
Peter Eisentraut cc25464763 Add exclusion to headercheck
src/include/common/unicode_combining_table.h is currently not meant to
be included standalone.  Things could be refactored to allow it, but
that would be beyond the present purpose.  So adding an exclusion here
seems best.

Discussion: https://www.postgresql.org/message-id/10754.1579535012@sss.pgh.pa.us
2020-01-24 12:23:06 +01:00
Tom Lane c32704441d Add configure probe for rl_completion_suppress_quote.
I had supposed that all versions of Readline that have filename
quoting hooks also have the rl_completion_suppress_quote variable.
But it seems OpenBSD managed to find a version someplace that does
not, so we'll have to expend a separate configure probe for that.

(Light testing suggests that this version also lacks the bugs that
make it necessary to frob that variable.  Hooray!)

Per buildfarm.
2020-01-23 18:20:57 -05:00
Tom Lane cd69ec66c8 Improve psql's tab completion for filenames.
The Readline library contains a fair amount of knowledge about how to
tab-complete filenames, but it turns out that that doesn't work too well
unless we follow its expectation that we use its filename quoting hooks
to quote and de-quote filenames.  We were trying to do such quote handling
within complete_from_files(), and that's still what we have to do if we're
using libedit, which lacks those hooks.  But for Readline, it works a lot
better if we tell Readline that single-quote is a quoting character and
then provide hooks that know the details of the quoting rules for SQL
and psql meta-commands.

Hence, resurrect the quoting hook functions that existed in the original
version of tab-complete.c (and were disabled by commit f6689a328 because
they "didn't work so well yet"), and whack on them until they do seem to
work well.

Notably, this fixes bug #16059 from Steven Winfield, who pointed out
that the previous coding would strip quote marks from filenames in SQL
COPY commands, even though they're syntactically necessary there.
Now, we not only don't do that, but we'll add a quote mark when you
tab-complete, even if you didn't type one.

Getting this to work across a range of libedit versions (and, to a
lesser extent, libreadline versions) was depressingly difficult.
It will be interesting to see whether the new regression test cases
pass everywhere in the buildfarm.

Some future patch might try to handle quoted SQL identifiers with
similar explicit quoting/dequoting logic, but that's for another day.

Patch by me, reviewed by Peter Eisentraut.

Discussion: https://postgr.es/m/16059-8836946734c02b84@postgresql.org
2020-01-23 11:07:12 -05:00
Amit Kapila 40d964ec99 Allow vacuum command to process indexes in parallel.
This feature allows the vacuum to leverage multiple CPUs in order to
process indexes.  This enables us to perform index vacuuming and index
cleanup with background workers.  This adds a PARALLEL option to the VACUUM
command, where the user can specify the number of workers to be used to
perform the command, limited by the number of indexes on a table.
Specifying zero as the number of workers disables parallelism.
This option can't be used with the FULL option.

Each index is processed by at most one vacuum process.  Therefore parallel
vacuum can be used when the table has at least two indexes.

The parallel degree is either specified by the user or determined based on
the number of indexes that the table has, and further limited by
max_parallel_maintenance_workers.  The index can participate in parallel
vacuum iff its size is greater than min_parallel_index_scan_size.

Author: Masahiko Sawada and Amit Kapila
Reviewed-by: Dilip Kumar, Amit Kapila, Robert Haas, Tomas Vondra,
Mahendra Singh and Sergei Kornilov
Tested-by: Mahendra Singh and Prabhat Sahu
Discussion:
https://postgr.es/m/CAD21AoDTPMgzSkV4E3SFo1CH_x50bf5PqZFQf4jmqjk-C03BWg@mail.gmail.com
https://postgr.es/m/CAA4eK1J-VoR9gzS5E75pcD-OH0mEyCdp8RihcwKrcuw7J-Q0+w@mail.gmail.com
2020-01-20 07:57:49 +05:30
Heikki Linnakangas 7aaefadaac Remove separate files for the initial contents of pg_(sh)description
This data was only in separate files because it was the most convenient
way to handle it with a shell script. Now that we use a general-purpose
programming language, it's easy to assemble the data into the same format
as the rest of the catalogs and output it into postgres.bki. This allows
removal of some special-purpose code from initdb.c.

Discussion: https://www.postgresql.org/message-id/CACPNZCtVFtjHre6hg9dput0qRPp39pzuyA2A6BT8wdgrRy%2BQdA%40mail.gmail.com
Author: John Naylor
2020-01-19 13:54:58 +02:00
Michael Paquier f7cd5896a6 Move OpenSSL routines for min/max protocol setting to src/common/
Two routines have been added in OpenSSL 1.1.0 to set the protocol bounds
allowed within a given SSL context:
- SSL_CTX_set_min_proto_version
- SSL_CTX_set_max_proto_version

As Postgres supports OpenSSL down to 1.0.1 (as of HEAD), equivalent
replacements exist in the tree, which are only available for the
backend.  A follow-up patch is planned to add control of the SSL
protocol bounds for libpq, so move those routines to src/common/ so that
libpq can use them.

Author: Daniel Gustafsson
Discussion: https://postgr.es/m/4F246AE3-A7AE-471E-BD3D-C799D3748E03@yesql.se
2020-01-17 10:06:17 +09:00
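With OpenSSL 1.1.0 or newer the native calls look like this (a sketch; on
older OpenSSL the in-tree replacement helpers mentioned above are needed
instead):

    #include <openssl/ssl.h>

    /* Restrict a context to TLSv1.2 and anything newer the library supports.
     * Both calls return 1 on success. */
    static int
    restrict_protocol_range(SSL_CTX *ctx)
    {
        if (SSL_CTX_set_min_proto_version(ctx, TLS1_2_VERSION) != 1)
            return 0;
        /* 0 means "no upper bound", i.e. the newest supported protocol */
        if (SSL_CTX_set_max_proto_version(ctx, 0) != 1)
            return 0;
        return 1;
    }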
Tom Lane e6afa8918c Move wchar.c and encnames.c to src/common/.
Formerly, various frontend directories symlinked these two sources
and then built them locally.  That's an ancient, ugly hack, and
we now have a much better way: put them into libpgcommon.
So do that.  (The immediate motivation for this is the prospect
of having to introduce still more symlinking if we don't.)

This commit moves these two files absolutely verbatim, for ease of
reviewing the git history.  There's some follow-on work to be done
that will modify them a bit.

Robert Haas, Tom Lane

Discussion: https://postgr.es/m/CA+TgmoYO8oq-iy8E02rD8eX25T-9SmyxKWqqks5OMHxKvGXpXQ@mail.gmail.com
2020-01-16 15:58:55 -05:00
Peter Eisentraut 16a4a3d59c Remove libpq.rc, use win32ver.rc for libpq
For historical reasons, libpq used a separate libpq.rc file for the
Windows builds while all other components use a common file
win32ver.rc.  With a bit of tweaking, the libpq build can also use the
win32ver.rc file.  This removes a bit of duplicative code.

Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/ad505e61-a923-e114-9f38-9867d161073f@2ndquadrant.com
2020-01-15 15:06:12 +01:00
Bruce Momjian 7316f11be0 tools/copyright.pl: skip copyright changes for *.key files
Reported-by: Alvaro Herrera

Discussion: https://postgr.es/m/20200102184059.GA25435@alvherre.pgsql

Backpatch-through: master
2020-01-14 10:51:58 -05:00
Peter Eisentraut f85a485f89 Add support for automatically updating Unicode derived files
We currently have several sets of files generated from data provided
by Unicode.  These all have ad hoc rules and instructions for updating
when new Unicode versions appear, and it's not done consistently.

This patch centralizes and automates the process and makes it part of
the release checklist.  The Unicode and CLDR versions are specified in
Makefile.global.in.  There is a new make target "update-unicode" that
downloads all the relevant files and runs the generation script.

There is also a new script for generating the table of combining
characters for ucs_wcwidth().  That table is now in a separate include
file rather than hardcoded into the middle of other code.  This is
based on the script that was used for generating
d8594d123c, but the script itself wasn't
committed at that time.

Reviewed-by: John Naylor <john.naylor@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/c8d05f42-443e-6c23-819b-05b31759a37c@2ndquadrant.com
2020-01-09 10:08:14 +01:00
Peter Eisentraut 45223fd9ce Modernize Python exception syntax in tests
Change the exception syntax used in the tests to use the more current

    except Exception as ex:

rather than the old

    except Exception, ex:

Since support for Python <2.6 has been removed, all supported versions
now support the new style, and we can save one step in the Python 3
compatibility conversion.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/98b69261-298c-13d2-f34d-836fd9c29b21%402ndquadrant.com
2020-01-08 22:47:22 +01:00
Bruce Momjian 7559d8ebfa Update copyrights for 2020
Backpatch-through: update all files in master, backpatch legal files through 9.4
2020-01-01 12:21:45 -05:00
Peter Eisentraut 77f416af6e Clean up inconsistent backslash use in paths
Most of the MSVC Perl code uses forward slashes for file paths.  Make
the few places that use backslashes the same.  This also helps running
that code on non-Windows.
2019-12-20 12:29:07 +01:00
Peter Eisentraut 8f4fb4c648 Generate pg_config.h from pg_config.h.in on Windows
Previously, the Windows MSVC build generated pg_config.h from a
hard-coded pg_config.h.win32 with some ad hoc postprocessing.  The
pg_config.h.win32 file required manual maintenance and was as a result
frequently out of date.

Instead, have the MSVC build scripts emulate what configure and
config.status do: collect a list of defines and then create
pg_config.h from pg_config.h.in by changing the appropriate lines.

The previous setup was made to support old Windows build systems that
didn't have any text processing capabilities, but the current system
has Perl, so it's not a problem.  pg_config.h.win32 is removed.

In order to try to keep the Windows side of things more up to date in
the future, we now also require that all symbols found in
pg_config.h.in are defined in the MSVC build system.  So if there is a
change in configure that results in a new symbol, an update in
Solution.pm will be required.

The other headers managed by AC_CONFIG_HEADERS in configure, namely
src/include/pg_config_ext.h and
src/interfaces/ecpg/include/ecpg_config.h, get the same treatment, so
this removes even more ad hoc code in the MSVC build scripts.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/1441b834-f434-e0bf-46ed-9c4d5c29c2d4%402ndquadrant.com
2019-12-20 09:15:08 +01:00
Michael Paquier e1551f96e6 Refactor attribute mappings used in logical tuple conversion
Tuple conversion support in tupconvert.c is able to convert rowtypes
between two relations, inner and outer, which are logically equivalent
but have a different ordering or even dropped columns (used mainly for
inheritance tree and partitions).  This makes use of attribute mappings,
which are simple arrays made of AttrNumber elements with a length
matching the number of attributes of the outer relation.  The length of
the attribute mapping has been treated as completely independent of the
mapping itself until now, making it easy to pass down an incorrect
mapping length.

This commit refactors the code related to attribute mappings and moves
it into an independent facility called attmap.c, extracted from
tupconvert.c.  This merges the attribute mapping with its length,
avoiding the need to guess what length a mapping should have, as this
is computed once, when the map is built.

This will avoid mistakes like what has been fixed in dc816e58, which has
used an incorrect mapping length by matching it with the number of
attributes of an inner relation (a child partition) instead of an outer
relation (a partitioned table).

Author: Michael Paquier
Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/20191121042556.GD153437@paquier.xyz
2019-12-18 16:23:02 +09:00
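The general shape of the new facility is a small struct that keeps the map and
its length together (sketched from the description above, assuming the usual
backend typedefs; see attmap.h for the authoritative definition):

    /* Attribute mapping: one entry per attribute of the destination rowtype.
     * Keeping maplen next to the array prevents callers from pairing the map
     * with the wrong length. */
    typedef struct AttrMap
    {
        AttrNumber *attnums;    /* corresponding attribute number in the other
                                 * relation, or 0 for a dropped column */
        int         maplen;     /* number of entries in attnums */
    } AttrMap;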
Peter Eisentraut f14413b684 Sort out getpeereid() and peer auth handling on Windows
The getpeereid() uses have so far been protected by HAVE_UNIX_SOCKETS,
so they didn't ever care about Windows support.  But in anticipation
of Unix-domain socket support on Windows, that needs to be handled
differently.

Windows doesn't support getpeereid() at this time, so we use the
existing not-supported code path.  We let configure do its usual thing
of picking up the replacement from libpgport, instead of the custom
overrides that it was doing before.

But then Windows doesn't have struct passwd, so this patch sprinkles
some additional #ifdef WIN32 around to make it work.  This is similar
to existing code that deals with this issue.

Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/5974caea-1267-7708-40f2-6009a9d653b0@2ndquadrant.com
2019-12-16 09:36:08 +01:00
Peter Eisentraut 877b61e9ce Cosmetic cleaning of pg_config.h.win32
Clean up some comments (some generated by old versions of autoconf)
and some random ordering differences, so it's easier to diff this
against the default pg_config.h or pg_config.h.in.  Remove LOCALEDIR
handling from pg_config.h.win32 altogether because it's already in
pg_config_paths.h.
2019-12-10 21:24:25 +01:00
Alvaro Herrera 9a79823496 Blind attempt at fixing ecpg/compatlib's build
It now needs libpgcommon in order to get pnstrdup.

Per buildfarm.
2019-12-04 20:16:24 -03:00
Michael Paquier a4fd3aa719 Refactor query cancellation code into src/fe_utils/
Originally, this code was duplicated in src/bin/psql/ and
src/bin/scripts/, but it can be useful for other frontend applications,
like pgbench.  This refactoring offers the possibility to set up a custom
callback which would get called in the signal handler for SIGINT, or when
console interrupt events happen on Windows.

Author: Fabien Coelho, with contributions from Michael Paquier
Reviewed-by: Álvaro Herrera, Ibrar Ahmed
Discussion: https://postgr.es/m/alpine.DEB.2.21.1910311939430.27369@lancre
2019-12-02 11:18:56 +09:00
Peter Eisentraut 4513d8b07b Move configure --disable-float8-byval to pg_config_manual.h
This build option was once useful to maintain compatibility with
version-0 functions, but those are no longer supported, so this option
is no longer useful for end users.  We keep the option available to
developers in pg_config_manual.h so that it is easy to test the
pass-by-reference code paths without having to fire up a 32-bit
machine.

Discussion: https://www.postgresql.org/message-id/flat/f3e1e576-2749-bbd7-2d57-3f9dcf75255a@2ndquadrant.com
2019-11-27 12:27:20 +01:00
Michael Paquier 4cb658af70 Refactor reloption handling for index AMs in-core
This reworks the reloption parsing and build of a couple of index AMs by
creating new structures for each index AM's options.  This split was
already done for BRIN, GIN and GiST (which actually has a fillfactor
parameter), but not for hash, B-tree and SPGiST which relied on
StdRdOptions due to an overlap with the default option set.

This saves a couple of bytes for rd_options in each relcache entry with
indexes making use of relation options, and brings more consistency
between all index AMs.  While on it, add a couple of AssertMacro() calls
to make sure that utility macros to grab values of reloptions are used
with the expected index AM.

Author: Nikolay Shaplov
Reviewed-by: Amit Langote, Michael Paquier, Álvaro Herrera, Dent John
Discussion: https://postgr.es/m/4127670.gFlpRb6XCm@x200m
2019-11-25 09:40:53 +09:00
Peter Eisentraut 2e4db241bf Remove configure --disable-float4-byval
This build option was only useful to maintain compatibility for
version-0 functions, but those are no longer supported, so this option
can be removed.

float4 is now always pass-by-value; the pass-by-reference code path is
completely removed.

Discussion: https://www.postgresql.org/message-id/flat/f3e1e576-2749-bbd7-2d57-3f9dcf75255a@2ndquadrant.com
2019-11-21 18:29:21 +01:00
Tom Lane 7a0574b50e Fix ecpglib.h to declare bool consistently with c.h.
This completes the task begun in commit 1408d5d86, to synchronize
ECPG's exported definitions with the definition of bool used by
c.h (and, therefore, the one actually in use in the ECPG library).
On practically all modern platforms, ecpglib.h will now just
include <stdbool.h>, which should surprise nobody anymore.
That removes a header-inclusion-order hazard for ECPG clients,
who previously might get build failures or unexpected behavior
depending on whether they'd included <stdbool.h> themselves,
and if so, whether before or after ecpglib.h.

On platforms where sizeof(_Bool) is not 1 (only old PPC-based
Mac systems, as far as I know), things are still messy, as
inclusion of <stdbool.h> could still break ECPG client code.
There doesn't seem to be any clean fix for that, and given the
probably-negligible population of users who would care anymore,
it's not clear we should go far out of our way to cope with it.
This change at least fixes some header-inclusion-order hazards
for our own code, since c.h and ecpglib.h previously disagreed
on whether bool should be char or unsigned char.

To implement this with minimal invasion of ECPG client namespace,
move the choice of whether to rely on <stdbool.h> into configure,
and have it export a configuration symbol PG_USE_STDBOOL.

ecpglib.h no longer exports definitions for TRUE and FALSE,
only their lowercase brethren.  We could undo that if we get
push-back about it.

Ideally we'd back-patch this as far as v11, which is where c.h
started to rely on <stdbool.h>.  But the odds of creating problems
for formerly-working ECPG client code seem about as large as the
odds of fixing any non-working cases, so we'll just do this in HEAD.

Discussion: https://postgr.es/m/CAA4eK1LmaKO7Du9M9Lo=kxGU8sB6aL8fa3sF6z6d5yYYVe3BuQ@mail.gmail.com
2019-11-12 13:00:04 -05:00
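Conceptually, the exported header can now reduce to a conditional of the
following shape (a simplified sketch, not the verbatim contents of ecpglib.h):

    #ifdef PG_USE_STDBOOL
    #include <stdbool.h>            /* bool, true, false from the C99 header */
    #else
    /* Fallback for the rare platforms where sizeof(_Bool) != 1 */
    #ifndef bool
    typedef unsigned char bool;
    #endif
    #ifndef true
    #define true    ((bool) 1)
    #endif
    #ifndef false
    #define false   ((bool) 0)
    #endif
    #endif                          /* PG_USE_STDBOOL */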
Andres Freund 26aaf97b68 Make StringInfo available to frontend code.
There's plenty places in frontend code that could benefit from a
string buffer implementation. Some because it yields simpler and
faster code, and some others because of the desire to share code
between backend and frontend.

While there is a string buffer implementation available to frontend
code, libpq's PQExpBuffer, it is clunkier than stringinfo, it
introduces a libpq dependency, doesn't allow for sharing between
frontend and backend code, and has a higher API/ABI stability
requirement due to being exposed via libpq.

Therefore it seems best to just make StringInfo usable by frontend code.
There's not much to do for that, except for rewriting two subsequent
elog/ereport calls into other types of error reporting, and deciding on
a maximum string length.

For the maximum string size I decided to privately define MaxAllocSize
to the same value as used in the backend. It seems likely that we'll
want to reconsider this for both backend and frontend code in the not
too far away future.

For now I've left stringinfo.h in lib/, rather than common/, to reduce
the likelihood of unnecessary breakage. We could alternatively decide
to provide a redirecting stringinfo.h in lib/, or just not provide
compatibility.

Author: Andres Freund
Reviewed-By: Kyotaro Horiguchi, Daniel Gustafsson
Discussion: https://postgr.es/m/20190920051857.2fhnvhvx4qdddviz@alap3.anarazel.de
2019-11-05 14:56:40 -08:00
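A minimal frontend-side sketch of the now-shared API (the function and its
purpose are illustrative; only the stringinfo calls themselves come from the
library):

    #include "postgres_fe.h"
    #include "lib/stringinfo.h"

    /* Build a human-readable description incrementally.
     * The caller frees the returned string with pfree(). */
    static char *
    describe_file(const char *path, long size)
    {
        StringInfoData buf;

        initStringInfo(&buf);
        appendStringInfo(&buf, "file \"%s\"", path);
        if (size >= 0)
            appendStringInfo(&buf, " (%ld bytes)", size);

        return buf.data;
    }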
Andres Freund 01368e5d9d Split all OBJS style lines in makefiles into one-line-per-entry style.
When maintaining or merging patches, one of the most common sources of
conflicts is the list of objects in makefiles. Especially when
the split across lines has been changed on both sides, which is
somewhat common due to attempting to stay below 80 columns, those
conflicts are unnecessarily laborious to resolve.

By splitting, and alphabetically sorting, OBJS style lines into one
object per line, conflicts should be less frequent, and easier to
resolve when they still occur.

Author: Andres Freund
Discussion: https://postgr.es/m/20191029200901.vww4idgcxv74cwes@alap3.anarazel.de
2019-11-05 14:41:07 -08:00
Noah Misch 30ee5d17c2 For all ppc compilers, implement compare_exchange and fetch_add with asm.
This is more like how we handle s_lock.h and arch-x86.h.

Reviewed by Tom Lane.

Discussion: https://postgr.es/m/20191005173400.GA3979129@rfd.leadboat.com
2019-10-18 20:20:52 -07:00
Peter Eisentraut 38d8dce61f Remove some code for old unsupported versions of MSVC
As of d9dd406fe2, we require MSVC 2013,
which means _MSC_VER >= 1800.  This means that conditionals about
older versions of _MSC_VER can be removed or simplified.

Previous code was also in some cases handling MinGW, where _MSC_VER is
not defined at all, incorrectly, such as in pg_ctl.c and win32_port.h,
leading to some compiler warnings.  This should now be handled better.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
2019-10-08 10:50:54 +02:00
Peter Eisentraut cc4ec2d29a Fix incorrect use of term HEAD for Git
HEAD as used here was CVS terminology.  Now we mean master.
2019-10-07 09:44:17 +02:00
Andrew Dunstan ad7595b890 Handle spaces in OpenSSL install location for MSVC
First, make sure that the .exe name is quoted when trying to get the
version number. Also, don't quote the lib name for use in the project
files if it's already been quoted. This second change applies to all
libraries, not just OpenSSL.

This has clearly been broken forever, so backpatch to all live branches.
2019-10-04 15:34:40 -04:00
Alvaro Herrera caba97a9d9 Split out recovery config-writing code from pg_basebackup
... into a new file, fe_utils/recovery_gen.c.

This can later be used by pg_rewind.

Authors: Paul Guo, Jimmy Yih, Ashwin Agrawal.  A few tweaks by Álvaro Herrera
Reviewed-by: Michaël Paquier
Discussion: https://postgr.es/m/CAEET0ZEffUkXc48pg2iqARQgGRYDiiVxDu+yYek_bTwJF+q=Uw@mail.gmail.com
2019-09-25 14:35:24 -03:00
Michael Paquier 640c19869f Add dummy_index_am to src/test/modules/
This includes more tests dedicated to relation options, bringing the
coverage of this code close to 100%, and the module can be used for
other purposes, like a base template for an index AM implementation.

Author: Nikolay Shaplov, Michael Paquier
Reviewed-by: Álvaro Herrera, Dent John
Discussion: https://postgr.es/m/17071942.m9zZutALE6@x200m
2019-09-25 12:11:12 +09:00
Alexander Korotkov 6cae9d2c10 Improve handling of NULLs in KNN-GiST and KNN-SP-GiST
This commit improves the subject in two ways:

 * It removes the ugliness of 02f90879e7, which stores distance values and null
   flags in two separate arrays after the GISTSearchItem struct.  Instead we pack
   both the distance value and the null flag into the IndexOrderByDistance
   struct.  Alignment overhead should be negligible, because we typically deal
   with at most a few "col op const" expressions in the ORDER BY clause.
 * It fixes handling of "col op NULL" expressions in KNN-SP-GiST.  Now, these
   expressions are not passed to support functions, which can't deal with them.
   Instead, a NULL result is implicitly assumed.  In future we may decide to
   teach support functions to deal with NULL arguments, but the current solution
   is a bugfix suitable for backpatching.

Reported-by: Nikita Glukhov
Discussion: https://postgr.es/m/826f57ee-afc7-8977-c44c-6111d18b02ec%40postgrespro.ru
Author: Nikita Glukhov
Reviewed-by: Alexander Korotkov
Backpatch-through: 9.4
2019-09-19 21:48:39 +03:00
Michael Paquier 9d6e1ec5ce Expand properly list of TAP tests used for prove in vcregress.pl
Depending on the system used, t/*.pl may not be expanded into a list of
tests which can be consumed by prove when attempting to run TAP tests on
a given path.  Fix that by using glob() directly in the script, to make
sure that a complete list of tests is provided.  This has not proved to
be an issue with MSVC as the list was properly expanded, but it is on
Linux with perl's system().

This is extracted from a larger patch.

Author: Tom Lane
Discussion: https://postgr.es/m/6628.1567958876@sss.pgh.pa.us
Backpatch-through: 9.4
2019-09-11 11:07:18 +09:00
Tom Lane db43831899 Avoid using INFO elevel for what are fundamentally debug messages.
Commit 6f6b99d13 stuck an INFO message into the fast path for
checking partition constraints, for no very good reason except
that it made it easy for the regression tests to verify that
that path was taken.  Assorted later patches did likewise,
increasing the unsuppressable-chatter level from ALTER TABLE
even more.  This isn't good for the user experience, so let's
drop these messages down to DEBUG1 where they belong.  So as
not to have a loss of test coverage, create a TAP test that
runs the relevant queries with client_min_messages = DEBUG1
and greps for the expected messages.

This testing method is a bit brute-force --- in particular,
it duplicates the execution of a fair amount of the core
create_table and alter_table tests.  We experimented with
other solutions, but running any significant amount of
standard testing with client_min_messages = DEBUG1 seems
to have a lot of output-stability pitfalls, cf commits
bbb96c370 and 5655565c0.  Possibly at some point we'll look
into whether we can reduce the amount of test duplication.

Backpatch into v12, because some of these messages are new
in v12 and we don't really want to ship it that way.

Sergei Kornilov

Discussion: https://postgr.es/m/81911511895540@web58j.yandex.ru
Discussion: https://postgr.es/m/4859321552643736@myt5-02b80404fd9e.qloud-c.yandex.net
2019-09-07 19:03:11 -04:00
Robert Haas bd124996ef Create an API for inserting and deleting rows in TOAST tables.
This moves much of the non-heap-specific logic from toast_delete and
toast_insert_or_update into helper functions accessible via a new
header, toast_helper.h.  Using the functions in this module, a table
AM can implement creation and deletion of TOAST table rows with
much less code duplication than was possible heretofore.  Some
table AMs won't want to use the TOAST logic at all, but for those
that do this will make that easier.

Patch by me, reviewed and tested by Prabhat Sabu, Thomas Munro,
Andres Freund, and Álvaro Herrera.

Discussion: http://postgr.es/m/CA+TgmoZv-=2iWM4jcw5ZhJeL18HF96+W1yJeYrnGMYdkFFnEpQ@mail.gmail.com
2019-09-06 10:38:51 -04:00
Peter Eisentraut 74a308cf52 Use explicit_bzero
Use the explicit_bzero() function in places where it is important that
security information such as passwords is cleared from memory.  There
might be other places where it could be useful; this is just an
initial collection.

For platforms that don't have explicit_bzero(), provide various
fallback implementations.  (explicit_bzero() itself isn't standard,
but as Linux/glibc, FreeBSD, and OpenBSD have it, it's the most common
spelling, so it makes sense to make that the invocation point.)

Discussion: https://www.postgresql.org/message-id/flat/42d26bde-5d5b-c90d-87ae-6cab875f73be%402ndquadrant.com
2019-09-05 08:30:42 +02:00
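Typical call-site shape (a sketch; explicit_bzero() is declared in <string.h>
on glibc and OpenBSD, some platforms use <strings.h>, and the commit's
src/port/ fallback covers platforms that lack it entirely):

    #include <string.h>             /* some platforms need <strings.h> instead */

    static void
    prompt_and_use_password(void)
    {
        char    password[128];

        /* ... read the password into the buffer and use it ... */

        /* Scrub the secret; unlike plain memset(), this cannot be
         * optimized away by the compiler. */
        explicit_bzero(password, sizeof(password));
    }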
Peter Eisentraut c45643d618 Remove configure detection of crypt()
crypt() hasn't been needed since crypt detection was removed from
PostgreSQL, so these configure checks are not necessary.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/21f88934-f00c-27f6-a9d8-7ea06d317781%402ndquadrant.com
2019-08-21 21:36:54 +02:00
Tom Lane 55ea109188 Add "headerscheck" script to test header-file compilability under C.
We already had "cpluspluscheck", which served the dual purposes of
verifying that headers compile standalone and that they compile as C++.
However, C++ compilers don't have the exact same set of error conditions
as C compilers, so this doesn't really prove that a header will compile
standalone as C.

Hence, add a second script that's largely similar but runs the C
compiler not C++.

Also add a bit more documentation than the none-at-all we had before.

Discussion: https://postgr.es/m/14803.1566175851@sss.pgh.pa.us
2019-08-19 14:22:56 -04:00
Tom Lane a120791096 Use zic's new "-b slim" option to generate smaller timezone files.
IANA tzcode release 2019b adds an option that tells zic not to emit
the old 32-bit section of the timezone files, and to skip some other
space-wasting hacks needed for compatibility with old timezone client
libraries.  Since we only expect our own code to use the timezone data
we install, and our code is up-to-date with 2019b, there's no apparent
reason not to generate the smallest possible files.

Unfortunately, while the individual zone files do get significantly
smaller in many cases, they were not that big to begin with; which
means that no real space savings ensues on filesystems that don't
optimize small files.  (For instance, on ext4 with 4K block size,
"du" says the installed timezone tree is the same size as before.)
Still, it seems worth making the change, if only because this is
presumably the wave of the future.  At the very least, we'll save
some cycles while reading a zone file.

But given the marginal value and the fact that this is a new code
path, it doesn't seem worth the risk of back-patching this change
into stable branches.  Hence, unlike most of our timezone-related
changes, apply to HEAD only.

Discussion: https://postgr.es/m/24998.1563403327@sss.pgh.pa.us
2019-08-19 13:17:02 -04:00
Michael Paquier c96581abe4 Fix inconsistencies and typos in the tree, take 11
This fixes various typos in docs and comments, and removes some orphaned
definitions.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/5da8e325-c665-da95-21e0-c8a99ea61fbf@gmail.com
2019-08-19 16:21:39 +09:00
Michael Paquier 66bde49d96 Fix inconsistencies and typos in the tree, take 10
This addresses some issues with unnecessary code comments, fixes various
typos in docs and comments, and removes some orphaned structures and
definitions.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/9aabc775-5494-b372-8bcb-4dfc0bd37c68@gmail.com
2019-08-13 13:53:41 +09:00
Michael Paquier 8548ddc61b Fix inconsistencies and typos in the tree, take 9
This addresses more issues with code comments, variable names and
unreferenced variables.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/7ab243e0-116d-3e44-d120-76b3df7abefd@gmail.com
2019-08-05 12:14:58 +09:00
Michael Paquier eb43f3d193 Fix inconsistencies and typos in the tree
This is numbered take 8, and addresses again a set of issues with code
comments, variable names and unreferenced variables.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/b137b5eb-9c95-9c2f-586e-38aba7d59788@gmail.com
2019-07-29 12:28:30 +09:00
Peter Eisentraut 959f6d6a18 pg_upgrade: Default new bindir to pg_upgrade location
Make the directory where the pg_upgrade binary resides the default for
the new bindir, as running the pg_upgrade binary from where the new
cluster is installed is a very common scenario.  Setting this as the
default bindir for the new cluster will remove the need to provide it
explicitly via -B in many cases.

To support directories being missing from option parsing, extend the
directory check with a missingOk mode where the path must be filled at
a later point before being used.  Also move the exec_path check to
earlier in setup to make sure we know the new cluster bindir when we
scan for required executables.

This removes the exec_path from the OSInfo struct as it is not used
anywhere.

Author: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/9328.1552952117@sss.pgh.pa.us
2019-07-27 08:19:04 +02:00
Tom Lane cb9bb15783 Fix syntax error in commit 20e99cddd.
Per buildfarm.
2019-07-25 14:42:02 -04:00
Andrew Dunstan 20e99cdddb Honor MSVC WindowsSDKVersion if set
Add a line to the project file setting the target SDK.  Otherwise, in
VS2017 for example, if the default but optional 8.1 SDK is not installed,
the build will fail.

Patch from Peifeng Qiu, slightly edited by me.

Discussion: https://postgr.es/m/CABmtVJhw1boP_bd4=b3Qv5YnqEdL696NtHFi2ruiyQ6mFHkeQQ@mail.gmail.com

Backpatch to all live branches.
2019-07-25 11:38:43 -04:00
Michael Paquier 39aadc9842 Fix some inconsistencies in MSVC scripts
In the configure scripts, --with-ossp-uuid is obsolete and has been replaced
by --with-uuid, and it needs to specify a path to its library builds when
building with the MSVC scripts.  --with-perl also needs to specify a
path.

Author: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20190712.121529.194600624.horikyota.ntt@gmail.com
2019-07-13 16:51:31 +09:00
Michael Paquier 6b8548964b Fix inconsistencies in the code
This addresses a couple of issues in the code:
- Typos and inconsistencies in comments and function declarations.
- Removal of unreferenced function declarations.
- Removal of unnecessary compile flags.
- A cleanup error in regressplans.sh.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/0c991fdf-2670-1997-c027-772a420c4604@gmail.com
2019-07-08 13:15:09 +09:00
Michael Paquier 2b1394fc2b Add support for Visual Studio 2019 in build scripts
This fixes at the same time a set of inconsistencies in the
documentation and the scripts related to the versions of Windows SDK
supported.

Author: Haribabu Kommi
Reviewed-by: Andrew Dunstan, Juan José Santamaría Flecha, Michael
Paquier
Discussion: https://postgr.es/m/CAJrrPGcfqXhfPyMrny9apoDU7M1t59dzVAvoJ9AeAh5BJi+UzA@mail.gmail.com
2019-07-02 14:02:33 +09:00
Michael Paquier 9adda24543 Refactor code of reindexdb for query generation
This merges the portion related to REINDEX SYSTEM into the routine
already available for all the other reindex types, making the query
generation cleaner.  While on it, change the handling of the reindex
types to use an enum, which makes it possible to get rid of the hardcoded
strings previously used directly in the query generation for the same purpose
(i.e. "TABLE", "DATABASE", etc.).

Per discussion with Julien Rouhaud, Tom Lane, Alvaro Herrera and me.

Author: Julien Rouhaud
Discussion: https://postgr.es/m/CAOBaU_bSmSik_WRK9niDnm-3NkNZky6+uKxkmQwvthZvMWpS5A@mail.gmail.com
2019-07-02 11:36:53 +09:00
Tom Lane 615cebc94b Stamp HEAD as 13devel.
Let the hacking begin ...
2019-07-01 12:50:55 -04:00
Tom Lane 9e1c9f9594 pgindent run prior to branching v12.
pgperltidy and reformat-dat-files too, though the latter didn't
find anything to change.
2019-07-01 12:37:52 -04:00
Tom Lane c000a47ad1 Exclude new src/test/modules/unsafe_tests directory from MSVC build.
There's nothing to build here, and that was confusing AddContrib().
Per buildfarm.
2019-06-30 14:05:24 -04:00
Michael Paquier d993e0fb82 Add support for OpenSSL 1.1.0 and newer versions in MSVC scripts
Up to now, the MSVC build scripts were able to support only one fixed
version of OpenSSL, and they lacked logic to detect the version of
OpenSSL a given compilation of Postgres is linking to (currently 1.0.2,
the latest LTS of upstream which will be EOL'd at the end of 2019).

This commit adds more logic to detect the version of OpenSSL used by a
build and makes use of it to add support for compilation with OpenSSL
1.1.0 which requires a new set of compilation flags to work properly.

The supported OpenSSL installers have changed their library layout with
various library renames in the upgrade to 1.1.0, making the logic a
bit more complicated.  The scripts are now able to adapt to the new
world order.

Reported-by: Sergey Pashkov
Author: Juan José Santamaría Flecha, Michael Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/15789-8fc75dea3c5a17c8@postgresql.org
2019-06-26 10:44:46 +09:00
Michael Paquier 414cca40d5 Remove last references to WAL segment size in MSVC scripts
fc49e24 removed the last use of this compile-time variable, as the WAL
segment size is something that can now be set at initdb time; still, that
commit forgot to remove some references to it.

Discussion: https://postgr.es/m/20190617073228.GE18917@paquier.xyz
2019-06-19 11:18:50 +09:00
Michael Paquier 3412030205 Fix more typos and inconsistencies in the tree
Author: Alexander Lakhin
Discussion: https://postgr.es/m/0a5419ea-1452-a4e6-72ff-545b1a5a8076@gmail.com
2019-06-17 16:13:16 +09:00
Tomas Vondra 6cbfb784c3 Rework the pg_statistic_ext catalog
Since extended statistic got introduced in PostgreSQL 10, there was a
single catalog pg_statistic_ext storing both the definitions and built
statistic.  That's however problematic when a user is supposed to have
access only to the definitions, but not to user data.

Consider for example pg_dump on a database with RLS enabled - if the
pg_statistic_ext catalog respects RLS (which it should, if it contains
user data), pg_dump would not see any records and the result would not
define any extended statistics.  That would be a surprising behavior.

Until now this was not a pressing issue, because the existing types of
extended statistic (functional dependencies and ndistinct coefficients)
do not include any user data directly.  This changed with introduction
of MCV lists, which do include most common combinations of values.

The easiest way to fix this is to split the pg_statistic_ext catalog
into two - one for definitions, one for the built statistic values.
The new catalog is called pg_statistic_ext_data, and we're maintaining
a 1:1 relationship with the old catalog - either there are matching
records in both catalogs, or neither of them.

Bumped CATVERSION due to changing system catalog definitions.

Author: Dean Rasheed, with improvements by me
Reviewed-by: Dean Rasheed, John Naylor
Discussion: https://postgr.es/m/CAEZATCUhT9rt7Ui%3DVdx4N%3D%3DVV5XOK5dsXfnGgVOz_JhAicB%3DZA%40mail.gmail.com
2019-06-16 01:20:31 +02:00
Noah Misch ae78a9456c MSVC: Reconcile clean.bat with PostgreSQL 12 work. 2019-06-09 15:50:54 -07:00
Tom Lane f4755a2c01 Make cpluspluscheck more portable.
Teach it to scrape -I and -D switches from CPPFLAGS in Makefile.global.
This is useful for testing on, eg, FreeBSD, where you won't get far
without "-I/usr/local/include".

Also, expand the set of blacklisted-for-unportability atomics headers,
based on noting that arch-x86.h fails to compile on an ARM box.  The
other ones I'd omitted seem to compile all right on architectures they
don't belong to, but that's surely too shaky to rely on.  Let's do
like we did for the src/include/port/ headers, and ignore all except
the variant that's pulled in by the arch-independent header.
2019-06-02 13:45:01 -04:00
Tom Lane 10a53cae99 Un-break ecpg tests for Windows.
Declaring a function "inline" still doesn't work with Windows compilers
(C99? what's that?), unless the macro provided by pg_config.h is
in-scope, which it is not in our ECPG test programs.  So the workaround
I tried to use in commit 7640f9312 doesn't work for Windows.  Revert
the change in printf_hack.h, and instead just blacklist that file
in cpluspluscheck --- since it's a not-installed test file, we don't
really need to verify its C++ cleanliness anyway.
2019-06-02 11:07:54 -04:00
Tom Lane 6f54b80edd Improve coverage of cpluspluscheck.
Formerly, cpluspluscheck was only meant to examine headers that
we thought of as exported --- but its notion of what we export
was well behind the times.  Let's just make it check *all* .h
files, except for a well-defined blacklist, instead.

While at it, improve its ability to use a C++ compiler other than g++,
by scraping the CXX setting from Makefile.global and making it possible
to override the warning options used (per suggestion from Andres Freund).

Discussion: https://postgr.es/m/b517ec3918d645eb950505eac8dd434e@gaz-is.ru
2019-05-31 16:32:07 -04:00
Andres Freund b1cd7ce23f Integrate cpluspluscheck into build system.
Previously cpluspluscheck wouldn't work in vpath builds, this commit
fixes that. To make it easier to invoke, there's a top-level
cpluspluscheck target.

Discussion: https://postgr.es/20190530220244.kiputcbl4gkl2oo6@alap3.anarazel.de
2019-05-31 12:36:17 -07:00
Tom Lane 9e58705a7f Make our perfect hash functions be valid C++.
While C is happy to cast "const void *" to "const unsigned char *"
silently, C++ insists on an explicit cast.  Since we put these
functions into header files, cpluspluscheck whines about that.
Add the cast to pacify it.

Discussion: https://postgr.es/m/b517ec3918d645eb950505eac8dd434e@gaz-is.ru
2019-05-31 10:40:00 -04:00
Noah Misch 40b132c1af In the pg_upgrade test suite, don't write to src/test/regress.
When this suite runs installcheck, redirect file creations from
src/test/regress to src/bin/pg_upgrade/tmp_check/regress.  This closes a
race condition in "make -j check-world".  If the pg_upgrade suite wrote
to a given src/test/regress/results file in parallel with the regular
src/test/regress invocation writing it, a test failed spuriously.  Even
without parallelism, in "make -k check-world", the suite finishing
second overwrote the other's regression.diffs.  This revealed test
"largeobject" assuming @abs_builddir@ is getcwd(), so fix that, too.

Buildfarm client REL_10, released fifty-four days ago, supports saving
regression.diffs from its new location.  When an older client reports a
pg_upgradeCheck failure, it will no longer include regression.diffs.
Back-patch to 9.5, where pg_upgrade moved to src/bin.

Reviewed (in earlier versions) by Andrew Dunstan.

Discussion: https://postgr.es/m/20181224034411.GA3224776@rfd.leadboat.com
2019-05-28 12:59:00 -07:00
Noah Misch 10b72deafe In the pg_upgrade test suite, remove and recreate "tmp_check".
This allows "vcregress upgradecheck" to pass twice in immediate
succession, and it's more like how $(prove_check) works.  Back-patch to
9.5, where pg_upgrade moved to src/bin.

Discussion: https://postgr.es/m/20190520012436.GA1480421@rfd.leadboat.com
2019-05-28 12:58:30 -07:00
Tom Lane db6e2b4c52 Initial pgperltidy run for v12.
Make all the perl code look nice, too (for some value of "nice").
2019-05-22 13:36:19 -04:00
Tom Lane 8255c7a5ee Phase 2 pgindent run for v12.
Switch to 2.1 version of pg_bsd_indent.  This formats
multiline function declarations "correctly", that is with
additional lines of parameter declarations indented to match
where the first line's left parenthesis is.

Discussion: https://postgr.es/m/CAEepm=0P3FeTXRcU5B2W3jv3PgRVZ-kGUXLGfd42FFhUROO3ug@mail.gmail.com
2019-05-22 13:04:48 -04:00
Tom Lane be76af171c Initial pgindent run for v12.
This is still using the 2.0 version of pg_bsd_indent.
I thought it would be good to commit this separately,
so as to document the differences between 2.0 and 2.1 behavior.

Discussion: https://postgr.es/m/16296.1558103386@sss.pgh.pa.us
2019-05-22 12:55:34 -04:00
Peter Eisentraut 8bbb8166b6 Remove bug.template file
It's outdated and not really in use anymore.

Discussion: https://www.postgresql.org/message-id/flat/cf7ed2b1-1ebe-83cf-e05e-d5943f67af2d%402ndquadrant.com
2019-05-20 08:58:21 +02:00
Noah Misch ae35e1c9d7 Revert "In the pg_upgrade test suite, don't write to src/test/regress."
This reverts commit bd1592e857.  It had
multiple defects.

Discussion: https://postgr.es/m/12717.1558304356@sss.pgh.pa.us
2019-05-19 15:24:42 -07:00
Noah Misch bd1592e857 In the pg_upgrade test suite, don't write to src/test/regress.
When this suite runs installcheck, redirect file creations from
src/test/regress to src/bin/pg_upgrade/tmp_check/regress.  This closes a
race condition in "make -j check-world".  If the pg_upgrade suite wrote
to a given src/test/regress/results file in parallel with the regular
src/test/regress invocation writing it, a test failed spuriously.  Even
without parallelism, in "make -k check-world", the suite finishing
second overwrote the other's regression.diffs.  This revealed test
"largeobject" assuming @abs_builddir@ is getcwd(), so fix that, too.

Buildfarm client REL_10, released forty-five days ago, supports saving
regression.diffs from its new location.  When an older client reports a
pg_upgradeCheck failure, it will no longer include regression.diffs.
Back-patch to 9.5, where pg_upgrade moved to src/bin.

Reviewed by Andrew Dunstan.

Discussion: https://postgr.es/m/20181224034411.GA3224776@rfd.leadboat.com
2019-05-19 14:36:44 -07:00
Tom Lane fc9a62af3f Move logging.h and logging.c from src/fe_utils/ to src/common/.
The original placement of this module in src/fe_utils/ is ill-considered,
because several src/common/ modules have dependencies on it, meaning that
libpgcommon and libpgfeutils now have mutual dependencies.  That makes it
pointless to have distinct libraries at all.  The intended design is that
libpgcommon is lower-level than libpgfeutils, so only dependencies from
the latter to the former are acceptable.

We already have the precedent that fe_memutils and a couple of other
modules in src/common/ are frontend-only, so it's not stretching anything
out of whack to treat logging.c as a frontend-only module in src/common/.
To the extent that such modules help provide a common frontend/backend
environment for the rest of common/ to use, it's a reasonable design.
(logging.c does not yet provide an ereport() emulation, but one can
dream.)

Hence, move these files over, and revert basically all of the build-system
changes made by commit cc8d41511.  There are no places that need to grow
new dependencies on libpgcommon, further reinforcing the idea that this
is the right solution.

Discussion: https://postgr.es/m/a912ffff-f6e4-778a-c86a-cf5c47a12933@2ndquadrant.com
2019-05-14 14:20:10 -04:00
Noah Misch 34ff542a71 MSVC: Build ~35% faster by calling dumpbin just once per directory.
Peifeng Qiu

Discussion: https://postgr.es/m/CABmtVJiKXQjast0dQD-8KAtfm8XmyYxo-4Dc7+M+fBr8JRTqkw@mail.gmail.com
2019-05-03 21:56:47 -07:00
Noah Misch c098509927 Consistently test for in-use shared memory.
postmaster startup scrutinizes any shared memory segment recorded in
postmaster.pid, exiting if that segment matches the current data
directory and has an attached process.  When the postmaster.pid file was
missing, a starting postmaster used weaker checks.  Change to use the
same checks in both scenarios.  This increases the chance of a startup
failure, in lieu of data corruption, if the DBA does "kill -9 `head -n1
postmaster.pid` && rm postmaster.pid && pg_ctl -w start".  A postmaster
will no longer stop if shmat() of an old segment fails with EACCES.  A
postmaster will no longer recycle segments pertaining to other data
directories.  That's good for production, but it's bad for integration
tests that crash a postmaster and immediately delete its data directory.
Such a test now leaks a segment indefinitely.  No "make check-world"
test does that.  win32_shmem.c already avoided all these problems.  In
9.6 and later, enhance PostgresNode to facilitate testing.  Back-patch
to 9.4 (all supported versions).

Reviewed (in earlier versions) by Daniel Gustafsson and Kyotaro HORIGUCHI.

Discussion: https://postgr.es/m/20190408064141.GA2016666@rfd.leadboat.com
2019-04-12 22:36:38 -07:00
Noah Misch 82150a05be Revert "Consistently test for in-use shared memory."
This reverts commits 2f932f71d9,
16ee6eaf80 and
6f0e190056.  The buildfarm has revealed
several bugs.  Back-patch like the original commits.

Discussion: https://postgr.es/m/20190404145319.GA1720877@rfd.leadboat.com
2019-04-05 00:00:52 -07:00
Andres Freund 86b85044e8 tableam: Add table_multi_insert() and revamp/speed-up COPY FROM buffering.
This adds table_multi_insert(), and converts COPY FROM, the only user
of heap_multi_insert, to it.

A simple conversion of COPY FROM to use slots would have yielded a
slowdown when inserting into a partitioned table for some
workloads. Different partitions might need different slots (both slot
types and their descriptors), and dropping / creating slots when
there are constant partition changes is measurable.

Thus instead revamp the COPY FROM buffering for partitioned tables to
allow buffering of inserts into multiple tables, flushing only when
limits are reached across all partition buffers. By only dropping
slots when there've been inserts into too many different partitions,
the aforementioned overhead is gone. By allowing larger batches, even
when there are frequent partition changes, we actually speed such cases
up significantly.

By using slots, COPY of very narrow rows into unlogged / temporary
tables might slow down very slightly (due to the indirect function calls).

Author: David Rowley, Andres Freund, Haribabu Kommi
Discussion:
    https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
    https://postgr.es/m/20190327054923.t3epfuewxfqdt22e@alap3.anarazel.de
2019-04-04 16:28:18 -07:00
Thomas Munro 3eb77eba5a Refactor the fsync queue for wider use.
Previously, md.c and checkpointer.c were tightly integrated so that
fsync calls could be handed off and processed in the background.
Introduce a system of callbacks and file tags, so that other modules
can hand off fsync work in the same way.

For now only md.c uses the new interface, but other users are being
proposed.  Since there may be use cases that are not strictly SMGR
implementations, use a new function table for sync handlers rather
than extending the traditional SMGR one.

Instead of using a bitmapset of segment numbers for each RelFileNode
in the checkpointer's hash table, make the segment number part of the
key.  This requires sending explicit "forget" requests for every
segment individually when relations are dropped, but suits the file
layout schemes of proposed future users better (ie sparse or high
segment numbers).
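
As a rough sketch of the resulting shape (the field names and the
SYNC_HANDLER_MD constant are recalled here and should be treated as
assumptions rather than a definitive interface):

    #include "postgres.h"
    #include "storage/sync.h"

    /* Hand off an fsync of one md.c segment to the checkpointer. */
    static void
    queue_segment_sync(RelFileNode rnode, ForkNumber forknum, uint32 segno)
    {
        FileTag     tag;

        memset(&tag, 0, sizeof(tag));       /* tags are compared as hash keys */
        tag.handler = SYNC_HANDLER_MD;      /* which module's callback handles it */
        tag.forknum = forknum;
        tag.rnode = rnode;
        tag.segno = segno;

        RegisterSyncRequest(&tag, SYNC_REQUEST, false /* retryOnError */ );
    }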

Author: Shawn Debnath and Thomas Munro
Reviewed-by: Thomas Munro, Andres Freund
Discussion: https://postgr.es/m/CAEepm=2gTANm=e3ARnJT=n0h8hf88wqmaZxk0JYkxw+b21fNrw@mail.gmail.com
2019-04-04 23:38:38 +13:00
Noah Misch 2f932f71d9 Consistently test for in-use shared memory.
postmaster startup scrutinizes any shared memory segment recorded in
postmaster.pid, exiting if that segment matches the current data
directory and has an attached process.  When the postmaster.pid file was
missing, a starting postmaster used weaker checks.  Change to use the
same checks in both scenarios.  This increases the chance of a startup
failure, in lieu of data corruption, if the DBA does "kill -9 `head -n1
postmaster.pid` && rm postmaster.pid && pg_ctl -w start".  A postmaster
will no longer recycle segments pertaining to other data directories.
That's good for production, but it's bad for integration tests that
crash a postmaster and immediately delete its data directory.  Such a
test now leaks a segment indefinitely.  No "make check-world" test does
that.  win32_shmem.c already avoided all these problems.  In 9.6 and
later, enhance PostgresNode to facilitate testing.  Back-patch to 9.4
(all supported versions).

Reviewed by Daniel Gustafsson and Kyotaro HORIGUCHI.

Discussion: https://postgr.es/m/20130911033341.GD225735@tornado.leadboat.com
2019-04-03 17:03:46 -07:00
Stephen Frost b0b39f72b9 GSSAPI encryption support
On both the frontend and backend, prepare for GSSAPI encryption
support by moving common code for error handling into a separate file.
Fix a TODO for handling multiple status messages in the process.
Eliminate the OIDs, which have not been needed for some time.

Add frontend and backend encryption support functions.  Keep the
context initiation for authentication-only separate on both the
frontend and backend in order to avoid concerns about changing the
requested flags to include encryption support.

In postmaster, pull GSSAPI authorization checking into a shared
function.  Also share the initiator name between the encryption and
non-encryption codepaths.

For HBA, add "hostgssenc" and "hostnogssenc" entries that behave
similarly to their SSL counterparts.  "hostgssenc" requires either
"gss", "trust", or "reject" for its authentication.

Similarly, add a "gssencmode" parameter to libpq.  Supported values are
"disable", "require", and "prefer".  Notably, negotiation will only be
attempted if credentials can be acquired.  Move credential acquisition
into its own function to support this behavior.
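
For illustration, a minimal libpq sketch using the new parameter (host and
database names are placeholders):

    #include <stdio.h>
    #include "libpq-fe.h"

    int
    main(void)
    {
        /* "prefer" attempts GSSAPI encryption and falls back if unavailable */
        PGconn *conn = PQconnectdb("host=db.example.com dbname=postgres gssencmode=prefer");

        if (PQstatus(conn) != CONNECTION_OK)
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 0;
    }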

Add a simple pg_stat_gssapi view similar to pg_stat_ssl, for monitoring
if GSSAPI authentication was used, what principal was used, and if
encryption is being used on the connection.

Finally, add documentation for everything new, and update existing
documentation on connection security.

Thanks to Michael Paquier for the Windows fixes.

Author: Robbie Harwood, with changes to the read/write functions by me.
Reviewed in various forms and at different times by: Michael Paquier,
   Andres Freund, David Steele.
Discussion: https://www.postgresql.org/message-id/flat/jlg1tgq1ktm.fsf@thriss.redhat.com
2019-04-03 15:02:33 -04:00
Peter Eisentraut cc8d415117 Unified logging system for command-line programs
This unifies the various ad hoc logging (message printing, error
printing) systems used throughout the command-line programs.

Features:

- Program name is automatically prefixed.

- Message string does not end with newline.  This removes a common
  source of inconsistencies and omissions.

- Additionally, a final newline is automatically stripped, simplifying
  use of PQerrorMessage() etc., another common source of mistakes.

- I converted error message strings to use %m where possible.

- As a result of the above several points, more translatable message
  strings can be shared between different components and between
  frontends and backend, without gratuitous punctuation or whitespace
  differences.

- There is support for setting a "log level".  This is not meant to be
  user-facing, but can be used internally to implement debug or
  verbose modes.

- Lazy argument evaluation, so no significant overhead if logging at
  some level is disabled.

- Some color in the messages, similar to gcc and clang.  Set
  PG_COLOR=auto to try it out.  Some colors are predefined, but can be
  customized by setting PG_COLORS.

- Common files (common/, fe_utils/, etc.) can handle logging much more
  simply by just using one API without worrying too much about the
  context of the calling program, requiring callbacks, or having to
  pass "progname" around everywhere.

- Some programs called setvbuf() to make sure that stderr is
  unbuffered, even on Windows.  But not all programs did that.  This
  is now done centrally.

Soft goals:

- Reduces vertical space use and visual complexity of error reporting
  in the source code.

- Encourages more deliberate classification of messages.  For example,
  in some cases it wasn't clear without analyzing the surrounding code
  whether a message was meant as an error or just an info.

- Concepts and terms are vaguely aligned with popular logging
  frameworks such as log4j and Python logging.

This is all just about printing stuff out.  Nothing affects program
flow (e.g., fatal exits).  The uses are just too varied to do that.
Some existing code had wrappers that do some kind of print-and-exit,
and I adapted those.

I tried to keep the output mostly the same, but there is a lot of
historical baggage to unwind and special cases to consider, and I
might not always have succeeded.  One significant change is that
pg_rewind used to write all error messages to stdout.  That is now
changed to stderr.
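
For illustration, a minimal sketch of a frontend program using the new API
(the file-name argument is a placeholder):

    #include "postgres_fe.h"
    #include "fe_utils/logging.h"   /* later moved to common/logging.h */

    int
    main(int argc, char **argv)
    {
        pg_logging_init(argv[0]);   /* program-name prefix, colors, stderr setup */

        if (argc < 2)
        {
            pg_log_error("no input file specified");
            return 1;
        }
        pg_log_info("processing \"%s\"", argv[1]);
        return 0;
    }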

Reviewed-by: Donald Dong <xdong@csumb.edu>
Reviewed-by: Arthur Zakirov <a.zakirov@postgrespro.ru>
Discussion: https://www.postgresql.org/message-id/flat/6a609b43-4f57-7348-6480-bd022f924310@2ndquadrant.com
2019-04-01 20:01:35 +02:00
Alexander Korotkov 0a02e2ae02 GIN support for @@ and @? jsonpath operators
This commit makes existing GIN operator classes jsonb_ops and json_path_ops
support "jsonb @@ jsonpath" and "jsonb @? jsonpath" operators.  Basic idea is
to extract statements of following form out of jsonpath.

 key1.key2. ... .keyN = const

The rest of jsonpath is rechecked from heap.

Catversion is bumped.

Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
Author: Nikita Glukhov, Alexander Korotkov
Reviewed-by: Jonathan Katz, Pavel Stehule
2019-04-01 18:08:52 +03:00
Thomas Munro ad308058cc Use FullTransactionId for the transaction stack.
Provide GetTopFullTransactionId() and GetCurrentFullTransactionId().
The intended users of these interfaces are access methods that use
xids for visibility checks but don't want to have to go back and
"freeze" existing references some time later before the 32 bit xid
counter wraps around.
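
For illustration, a short sketch of how an access method might capture and
later compare such an xid (everything other than the new functions is
hypothetical):

    #include "postgres.h"
    #include "access/transam.h"
    #include "access/xact.h"

    static FullTransactionId my_stored_fxid;

    static void
    remember_current_xid(void)
    {
        /* 64-bit value: no wraparound, so no later "freeze" pass is needed */
        my_stored_fxid = GetCurrentFullTransactionId();
    }

    static bool
    stored_xid_precedes_top(void)
    {
        return FullTransactionIdPrecedes(my_stored_fxid, GetTopFullTransactionId());
    }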

Use a new struct to serialize the transaction state for parallel
query, because FullTransactionId doesn't fit into the previous
serialization scheme very well.

Author: Thomas Munro
Reviewed-by: Heikki Linnakangas
Discussion: https://postgr.es/m/CAA4eK1%2BMv%2Bmb0HFfWM9Srtc6MVe160WFurXV68iAFMcagRZ0dQ%40mail.gmail.com
2019-03-28 18:24:43 +13:00
Thomas Munro 2fc7af5e96 Add basic infrastructure for 64 bit transaction IDs.
Instead of inferring epoch progress from xids and checkpoints,
introduce a 64 bit FullTransactionId type and use it to track xid
generation.  This fixes an unlikely bug where the epoch is reported
incorrectly if the range of active xids wraps around more than once
between checkpoints.

The only user-visible effect of this commit is to correct the epoch
used by txid_current() and txid_status(), also visible with
pg_controldata, in those rare circumstances.  It also creates some
basic infrastructure so that later patches can use 64 bit
transaction IDs in more places.

The new type is a struct that we pass by value, as a form of strong
typedef.  This prevents the sort of accidental confusion between
TransactionId and FullTransactionId that would be possible if we
were to use a plain old uint64.

Author: Thomas Munro
Reported-by: Amit Kapila
Reviewed-by: Andres Freund, Tom Lane, Heikki Linnakangas
Discussion: https://postgr.es/m/CAA4eK1%2BMv%2Bmb0HFfWM9Srtc6MVe160WFurXV68iAFMcagRZ0dQ%40mail.gmail.com
2019-03-28 18:12:20 +13:00
Andres Freund 2a96909a4a tableam: Support for an index build's initial table scan(s).
To support building indexes over tables of different AMs, the scans to
do so need to be routed through the table AM.  While moving a fair
amount of code, nearly all the changes are just moving code to below a
callback.

Currently the range based interface wouldn't make much sense for non
block based table AMs. But that seems acceptable for now.

Author: Andres Freund
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
2019-03-27 19:59:06 -07:00
Andres Freund 558a9165e0 Compute XID horizon for page level index vacuum on primary.
Previously the xid horizon was only computed during WAL replay. That
had two major problems:
1) It relied on knowing what the table pointed to looks like. That was
   easy enough before the introducing of tableam (we knew it had to be
   heap, although some trickery around logging the heap relfilenodes
   was required). But to properly handle table AMs we need
   per-database catalog access to look up the AM handler, which
   recovery doesn't allow.
2) Not knowing the xid horizon also makes it hard to support logical
   decoding on standbys. When on a catalog table, we need to be able
   to conflict with slots that have an xid horizon that's too old. But
   computing the horizon by visiting the heap only works once
   consistency is reached, but we always need to be able to detect
   conflicts.

There's also a secondary problem, in that the current method performs
redundant work on every standby. But that's counterbalanced by
potentially computing the value when not necessary (either because
there's no standby, or because there are no connected backends).

Solve 1) and 2) by moving computation of the xid horizon to the
primary and by involving tableam in the computation of the horizon.

To address the potentially increased overhead, increase the efficiency
of the xid horizon computation for heap by sorting the tids, and
eliminating redundant buffer accesses. When prefetching is available,
additionally perform prefetching of buffers.  As this is more of a
maintenance task, rather than something routinely done in every read
only query, we add an arbitrary 10 to the effective concurrency -
thereby using IO concurrency, when not globally enabled.  That's
possibly not the perfect formula, but seems good enough for now.

Bumps WAL format, as latestRemovedXid is now part of the records, and
the heap's relfilenode isn't anymore.

Author: Andres Freund, Amit Khandekar, Robert Haas
Reviewed-By: Robert Haas
Discussion:
    https://postgr.es/m/20181212204154.nsxf3gzqv3gesl32@alap3.anarazel.de
    https://postgr.es/m/20181214014235.dal5ogljs3bmlq44@alap3.anarazel.de
    https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
2019-03-26 16:52:54 -07:00
Andres Freund 5db6df0c01 tableam: Add tuple_{insert, delete, update, lock} and use.
This adds new, required, table AM callbacks for insert/delete/update
and lock_tuple. To be able to reasonably use those, the EvalPlanQual
mechanism had to be adapted, moving more logic into the AM.

Previously both delete/update/lock call-sites and the EPQ mechanism had
to have awareness of the specific tuple format to be able to fetch the
latest version of a tuple. Obviously that needs to be abstracted
away. To do so, move the logic that finds the latest row version into
the AM. lock_tuple has a new flag argument,
TUPLE_LOCK_FLAG_FIND_LAST_VERSION, that forces it to lock the last
version, rather than the current one.  It'd have been possible to do
so via a separate callback as well, but finding the last version
usually also necessitates locking the newest version, making it
sensible to combine the two. This replaces the previous use of
EvalPlanQualFetch().  Additionally HeapTupleUpdated, which previously
signaled either a concurrent update or delete, is now split into two,
to avoid callers needing AM specific knowledge to differentiate.

The move of finding the latest row version into tuple_lock means that
encountering a row concurrently moved into another partition will now
raise an error about "tuple to be locked" rather than "tuple to be
updated/deleted" - which is accurate, as that always happens when
locking rows. While possibly slightly less helpful for users, it seems
like an acceptable trade-off.

As part of this commit HTSU_Result has been renamed to TM_Result, and
its members have been expanded to differentiate between updating and
deleting. HeapUpdateFailureData has been renamed to TM_FailureData.
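
For illustration, a hedged sketch of how a call site can now distinguish the
outcomes (the helper itself is hypothetical):

    #include "postgres.h"
    #include "access/tableam.h"

    static const char *
    describe_tm_result(TM_Result result)
    {
        switch (result)
        {
            case TM_Ok:
                return "operation succeeded";
            case TM_SelfModified:
                return "row already modified by this command";
            case TM_Updated:
                return "row concurrently updated";
            case TM_Deleted:
                return "row concurrently deleted";   /* no longer folded into "updated" */
            case TM_BeingModified:
                return "row is being modified by another transaction";
            default:
                return "other result";
        }
    }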

The interface to speculative insertion is changed so nodeModifyTable.c
does not have to set the speculative token itself anymore. Instead
there's a version of tuple_insert, tuple_insert_speculative, that
performs the speculative insertion (without requiring a flag to signal
that fact), and the speculative insertion is either made permanent
with table_complete_speculative(succeeded = true) or aborted with
succeeded = false.

Note that multi_insert is not yet routed through tableam, nor is
COPY. Changing multi_insert requires changes to copy.c that are large
enough to better be done separately.

Similarly, although simpler, CREATE TABLE AS and CREATE MATERIALIZED
VIEW are also only going to be adjusted in a later commit.

Author: Andres Freund and Haribabu Kommi
Discussion:
    https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
    https://postgr.es/m/20190313003903.nwvrxi7rw3ywhdel@alap3.anarazel.de
    https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
2019-03-23 19:55:57 -07:00
Michael Paquier ed308d7837 Add options to enable and disable checksums in pg_checksums
An offline cluster can now work with more modes in pg_checksums:
- --enable enables checksums in a cluster, updating all blocks with a
correct checksum, and updating the control file at the end.
- --disable disables checksums in a cluster, updating only the control
file.
- --check is an extra option able to verify checksums for a cluster, and
the default used if no mode is specified.

When running --enable or --disable, the data folder gets fsync'd for
durability, and then it is followed by a control file update and flush
to keep the operation consistent should the tool be interrupted, killed
or the host unplugged.  If no mode is specified in the options, then
--check is used for compatibility with older versions of pg_checksums
(named pg_verify_checksums in v11 where it was introduced).

Author: Michael Banck, Michael Paquier
Reviewed-by: Fabien Coelho, Magnus Hagander, Sergei Kornilov
Discussion: https://postgr.es/m/20181221201616.GD4974@nighthawk.caipicrew.dd-dns.de
2019-03-23 08:12:55 +09:00
Alexander Korotkov 550b9d26f8 Get rid of jsonpath_gram.h and jsonpath_scanner.h
Jsonpath grammar and scanner are both quite small.  It isn't worth the complexity
of compiling them separately.  This commit makes the grammar and scanner be compiled
at once.  Therefore, jsonpath_gram.h and jsonpath_scanner.h are no longer needed.
This commit also does some reorganization of code in jsonpath_gram.y.

Discussion: https://postgr.es/m/d47b2023-3ecb-5f04-d253-d557547cf74f%402ndQuadrant.com
2019-03-20 11:13:34 +03:00
Alexander Korotkov 72b6460336 Partial implementation of SQL/JSON path language
The SQL:2016 standard, among other things, contains a set of SQL/JSON features for
JSON processing inside a relational database.  The core of SQL/JSON is the JSON
path language, allowing access to parts of JSON documents and computations
over them.  This commit implements partial support for the JSON path language as a
separate datatype called "jsonpath".  The implementation is partial because
it's lacking datetime support and suppression of numeric errors.  Missing
features will be added later by separate commits.

Support of SQL/JSON features requires implementation of separate nodes, and it
will be considered in subsequent patches.  This commit includes the following
set of plain functions, allowing execution of jsonpath over jsonb values:

 * jsonb_path_exists(jsonb, jsonpath[, jsonb, bool]),
 * jsonb_path_match(jsonb, jsonpath[, jsonb, bool]),
 * jsonb_path_query(jsonb, jsonpath[, jsonb, bool]),
 * jsonb_path_query_array(jsonb, jsonpath[, jsonb, bool]),
 * jsonb_path_query_first(jsonb, jsonpath[, jsonb, bool]).

This commit also implements "jsonb @? jsonpath" and "jsonb @@ jsonpath", which
are wrappers over jsonpath_exists(jsonb, jsonpath) and jsonpath_predicate(jsonb,
jsonpath) respectively.  These operators will have index support
(implemented in subsequent patches).

Catversion bumped, to add new functions and operators.

Code was written by Nikita Glukhov and Teodor Sigaev, revised by me.
Documentation was written by Oleg Bartunov and Liudmila Mantrova.  The work
was inspired by Oleg Bartunov.

Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
Author: Nikita Glukhov, Teodor Sigaev, Alexander Korotkov, Oleg Bartunov, Liudmila Mantrova
Reviewed-by: Tomas Vondra, Andrew Dunstan, Pavel Stehule, Alexander Korotkov
2019-03-16 12:16:48 +03:00
Tom Lane a6417078c4 Create a script that can renumber manually-assigned OIDs.
This commit adds a Perl script renumber_oids.pl, which can reassign a
range of manually-assigned OIDs to someplace else by modifying OID
fields of the catalog *.dat files and OID-assigning macros in the
catalog *.h files.

Up to now, we've encouraged new patches that need manually-assigned
OIDs to use OIDs just above the range of existing OIDs.  Predictably,
this leads to patches stepping on each others' toes, as whichever
one gets committed first creates an OID conflict that other patch
author(s) have to resolve manually.  With the availability of
renumber_oids.pl, we can eliminate a lot of this hassle.
The new project policy, therefore, is:

* Encourage new patches to use high OIDs (the documentation suggests
choosing a block of OIDs at random in 8000..9999).

* After feature freeze in each development cycle, run renumber_oids.pl
to move all such OIDs down to lower numbers, thus freeing the high OID
range for the next development cycle.

This plan should greatly reduce the risk of OID collisions between
concurrently-developed patches.  Also, if such a collision happens
anyway, we have the option to resolve it without much effort by doing
an off-schedule OID renumbering to get the first-committed patch out
of the way.  Or a patch author could use renumber_oids.pl to change
their patch's assignments without much pain.

This approach does put a premium on not hard-wiring any OID values
in places where renumber_oids.pl and genbki.pl can't fix them.
Project practice in that respect seems to be pretty good already,
but a follow-on patch will sand down some rough edges.

John Naylor and Tom Lane, per an idea of Peter Geoghegan's

Discussion: https://postgr.es/m/CAH2-WzmMTGMcPuph4OvsO7Ykut0AOCF_i-=eaochT0dd2BN9CQ@mail.gmail.com
2019-03-12 10:50:48 -04:00
Andres Freund c2fe139c20 tableam: Add and use scan APIs.
To allow table accesses to not be directly dependent on heap, several
new abstractions are needed. Specifically:

1) Heap scans need to be generalized into table scans. Do this by
   introducing TableScanDesc, which will be the "base class" for
   individual AMs. This contains the AM independent fields from
   HeapScanDesc.

   The previous heap_{beginscan,rescan,endscan} et al. have been
   replaced with a table_ version.

   There's no direct replacement for heap_getnext(), as that returned
   a HeapTuple, which is undesirable for other AMs. Instead there's
   table_scan_getnextslot().  But note that heap_getnext() lives on,
   it's still used widely to access catalog tables.

   This is achieved by new scan_begin, scan_end, scan_rescan,
   scan_getnextslot callbacks.

2) The portion of parallel scans that's shared between backends needs
   to be able to do so without the user doing per-AM work. To achieve
   that new parallelscan_{estimate, initialize, reinitialize}
   callbacks are introduced, which operate on a new
   ParallelTableScanDesc, which again can be subclassed by AMs.

   As it is likely that several AMs are going to be block oriented,
   block oriented callbacks that can be shared between such AMs are
   provided and used by heap. table_block_parallelscan_{estimate,
   initialize, reinitialize} as callbacks, and
   table_block_parallelscan_{nextpage, init} for use in AMs. These
   operate on a ParallelBlockTableScanDesc.

3) Index scans need to be able to access tables to return a tuple, and
   there needs to be state across individual accesses to the heap to
   store state like buffers. That's now handled by introducing a
   sort-of-scan IndexFetchTable, which again is intended to be
   subclassed by individual AMs (for heap IndexFetchHeap).

   The relevant callbacks for an AM are index_fetch_{end, begin,
   reset} to create the necessary state, and index_fetch_tuple to
   retrieve an indexed tuple.  Note that index_fetch_tuple
   implementations need to be smarter than just blindly fetching the
   tuples for AMs that have optimizations similar to heap's HOT - the
   currently alive tuple in the update chain needs to be fetched if
   appropriate.

   Similar to table_scan_getnextslot(), it's undesirable to continue
   to return HeapTuples. Thus index_fetch_heap (might want to rename
   that later) now accepts a slot as an argument. Core code doesn't
   have a lot of call sites performing index scans without going
   through the systable_* API (in contrast to loads of heap_getnext
   calls and working directly with HeapTuples).

   Index scans now store the result of a search in
   IndexScanDesc->xs_heaptid, rather than xs_ctup->t_self. As the
   target is not generally a HeapTuple anymore that seems cleaner.

To be able to sensibly adapt code to use the above, two further
callbacks have been introduced:

a) slot_callbacks returns a TupleTableSlotOps* suitable for creating
   slots capable of holding a tuple of the AM's
   type. table_slot_callbacks() and table_slot_create() are based
   upon that, but have additional logic to deal with views, foreign
   tables, etc.

   While this change could have been done separately, nearly all the
   call sites that needed to be adapted for the rest of this commit
   also would have needed to be adapted for
   table_slot_callbacks(), making separation not worthwhile.

b) tuple_satisfies_snapshot checks whether the tuple in a slot is
   currently visible according to a snapshot. That's required as a few
   places now don't have a buffer + HeapTuple around, but a
   slot (which in heap's case internally has that information).

Additionally a few infrastructure changes were needed:

I) SysScanDesc, as used by systable_{beginscan, getnext} et al. now
   internally uses a slot to keep track of tuples. While
   systable_getnext() still returns HeapTuples, and will so for the
   foreseeable future, the index API (see 1) above) now only deals with
   slots.

The remainder, and largest part, of this commit is then adjusting all
scans in postgres to use the new APIs.
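
For illustration, a minimal sketch of a sequential scan through the new
interface (locking and error handling elided; the relation is assumed to be
open already):

    #include "postgres.h"
    #include "access/sdir.h"
    #include "access/tableam.h"
    #include "executor/tuptable.h"
    #include "utils/snapmgr.h"

    static void
    scan_all_rows(Relation rel)
    {
        TupleTableSlot *slot = table_slot_create(rel, NULL);
        TableScanDesc scan = table_beginscan(rel, GetActiveSnapshot(), 0, NULL);

        /* replaces the old heap_getnext() + HeapTuple pattern */
        while (table_scan_getnextslot(scan, ForwardScanDirection, slot))
        {
            /* work with the slot's contents here */
        }

        table_endscan(scan);
        ExecDropSingleTupleTableSlot(slot);
    }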

Author: Andres Freund, Haribabu Kommi, Alvaro Herrera
Discussion:
    https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
    https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
2019-03-11 12:46:41 -07:00
Tom Lane a0b7626268 Simplify release-note links to back branches.
Now that https://www.postgresql.org/docs/release/ is populated,
replace the stopgap text we had under "Prior Releases" with
a pointer to that archive.

Discussion: https://postgr.es/m/e0f09c9a-bd2b-862a-d379-601dfabc8969@postgresql.org
2019-03-09 18:42:39 -05:00
Andres Freund 8586bf7ed8 tableam: introduce table AM infrastructure.
This introduces the concept of table access methods, i.e. CREATE
  ACCESS METHOD ... TYPE TABLE and
  CREATE TABLE ... USING (storage-engine).
No table access functionality is delegated to table AMs as of this
commit, that'll be done in following commits.

Subsequent commits will incrementally abstract table access
functionality to be routed through table access methods. That change
is too large to be reviewed & committed at once, so it'll be done
incrementally.

Docs will be updated at the end, as adding them incrementally would
likely make them less coherent, and definitely is a lot more work,
without a lot of benefit.

Table access methods are specified similar to index access methods,
i.e. pg_am.amhandler returns, as INTERNAL, a pointer to a struct with
callbacks. In contrast to index AMs, that struct needs to live as long
as a backend, typically that's achieved by just returning a pointer to
a constant struct.
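
For illustration, a hedged sketch of the shape of such a handler ("myam" is a
hypothetical AM; the actual callback fields are omitted):

    #include "postgres.h"
    #include "access/tableam.h"
    #include "fmgr.h"

    PG_FUNCTION_INFO_V1(myam_handler);

    /* must live as long as the backend, hence a constant struct */
    static const TableAmRoutine myam_methods = {
        .type = T_TableAmRoutine,
        /* ... scan, fetch, insert, etc. callbacks would be filled in here ... */
    };

    Datum
    myam_handler(PG_FUNCTION_ARGS)
    {
        PG_RETURN_POINTER(&myam_methods);
    }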

Psql's \d+ now displays a table's access method. That can be disabled
with HIDE_TABLEAM=true, which is mainly useful so regression tests can
be run against different AMs.  It's quite possible that this behaviour
still needs to be fine tuned.

For now it's not allowed to set a table AM for a partitioned table, as
we've not resolved how partitions would inherit that. Disallowing
allows us to introduce, if we decide that's the way forward, such a
behaviour without a compatibility break.

Catversion bumped, to add the heap table AM and references to it.

Author: Haribabu Kommi, Andres Freund, Alvaro Herrera, Dimitri Golgov and others
Discussion:
    https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
    https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
    https://postgr.es/m/20190107235616.6lur25ph22u5u5av@alap3.anarazel.de
    https://postgr.es/m/20190304234700.w5tmhducs5wxgzls@alap3.anarazel.de
2019-03-06 09:54:38 -08:00
Tom Lane 02a6a54ecd Make use of compiler builtins and/or assembly for CLZ, CTZ, POPCNT.
Test for the compiler builtins __builtin_clz, __builtin_ctz, and
__builtin_popcount, and make use of these in preference to
handwritten C code if they're available.  Create src/port
infrastructure for "leftmost one", "rightmost one", and "popcount"
so as to centralize these decisions.
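
For illustration, a short sketch of the new src/port interface in use
(function names as added to pg_bitutils.h):

    #include "postgres.h"
    #include "port/pg_bitutils.h"

    static void
    report_mask(uint64 mask)
    {
        if (mask != 0)
        {
            int     highbit = pg_leftmost_one_pos64(mask);   /* "leftmost one" */
            int     lowbit = pg_rightmost_one_pos64(mask);   /* "rightmost one" */
            int     nbits = pg_popcount64(mask);             /* "popcount" */

            elog(DEBUG1, "mask %llx: high %d, low %d, bits set %d",
                 (unsigned long long) mask, highbit, lowbit, nbits);
        }
    }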

On x86_64, __builtin_popcount generally won't make use of the POPCNT
opcode because that's not universally supported yet.  Provide code
that checks CPUID and then calls POPCNT via asm() if available.
This requires indirecting through a function pointer, which is
an annoying amount of overhead for a one-instruction operation,
but it's probably not worth working harder than this for our
current use-cases.

I'm not sure we've found all the existing places that could profit
from this new infrastructure; but we at least touched all the
ones that used copied-and-pasted versions of the bitmapset.c code,
and got rid of multiple copies of the associated constant arrays.

While at it, replace c-compiler.m4's one-per-builtin-function
macros with a single one that can handle all the cases we need
to worry about so far.  Also, because I'm paranoid, make those
checks into AC_LINK checks rather than just AC_COMPILE; the
former coding failed to verify that libgcc has support for the
builtin, in cases where it's not inline code.

David Rowley, Thomas Munro, Alvaro Herrera, Tom Lane

Discussion: https://postgr.es/m/CAKJS1f9WTAGG1tPeJnD18hiQW5gAk59fQ6WK-vfdAKEHyRg2RA@mail.gmail.com
2019-02-15 23:22:33 -05:00
Alvaro Herrera 457aef0f1f Revert attempts to use POPCNT etc instructions
This reverts commits fc6c72747a, 109de05cbb, d0b4663c23 and
711bab1e4d.

Somebody will have to try harder before submitting this patch again.
I've spent entirely too much time on it already, and the #ifdef maze yet
to be written in order for it to build at all got on my nerves.  The
amount of work needed to get a platform-specific performance improvement
that's barely above the noise level is not worth it.
2019-02-15 16:32:30 -03:00
Alvaro Herrera 711bab1e4d Add basic support for using the POPCNT and SSE4.2's LZCNT opcodes
These opcodes have been around in the AMD world since 2007, and 2008 in
the case of Intel.  They're supported in GCC and Clang via some __builtin
macros.  The opcodes may be unavailable during runtime, in which case we
fall back on a C-based implementation of the code.  In order to get the
POPCNT instruction we must pass the -mpopcnt option to the compiler.  We
do this only for the pg_bitutils.c file.

David Rowley (with fragments taken from a patch by Thomas Munro)

Discussion: https://postgr.es/m/CAKJS1f9WTAGG1tPeJnD18hiQW5gAk59fQ6WK-vfdAKEHyRg2RA@mail.gmail.com
2019-02-13 16:10:06 -03:00
Andrew Gierth 02ddd49932 Change floating-point output format for improved performance.
Previously, floating-point output was done by rounding to a specific
decimal precision; by default, to 6 or 15 decimal digits (losing
information) or as requested using extra_float_digits. Drivers that
wanted exact float values, and applications like pg_dump that must
preserve values exactly, set extra_float_digits=3 (or sometimes 2 for
historical reasons, though this isn't enough for float4).

Unfortunately, decimal rounded output is slow enough to become a
noticeable bottleneck when dealing with large result sets or COPY of
large tables when many floating-point values are involved.

Floating-point output can be done much faster when the output is not
rounded to a specific decimal length, but rather is chosen as the
shortest decimal representation that is closer to the original float
value than to any other value representable in the same precision. The
recently published Ryu algorithm by Ulf Adams is both relatively
simple and remarkably fast.

Accordingly, change float4out/float8out to output shortest decimal
representations if extra_float_digits is greater than 0, and make that
the new default. Applications that need rounded output can set
extra_float_digits back to 0 or below, and take the resulting
performance hit.

We make one concession to portability for systems with buggy
floating-point input: we do not output decimal values that fall
exactly halfway between adjacent representable binary values (which
would rely on the reader doing round-to-nearest-even correctly). This
is known to be a problem at least for VS2013 on Windows.

Our version of the Ryu code originates from
https://github.com/ulfjack/ryu/ at commit c9c3fb1979, but with the
following (significant) modifications:

 - Output format is changed to use fixed-point notation for small
   exponents, as printf would, and also to use lowercase 'e', a
   minimum of 2 exponent digits, and a mandatory sign on the exponent,
   to keep the formatting as close as possible to previous output.

 - The output of exact midpoint values is disabled as noted above.

 - The integer fast-path code is changed somewhat (since we have
   fixed-point output and the upstream did not).

 - Our project style has been largely applied to the code with the
   exception of C99 declaration-after-statement, which has been
   retained as an exception to our present policy.

 - Most of upstream's debugging and conditionals are removed, and we
   use our own configure tests to determine things like uint128
   availability.

Changing the float output format obviously affects a number of
regression tests. This patch uses an explicit setting of
extra_float_digits=0 for test output that is not expected to be
exactly reproducible (e.g. due to numerical instability or differing
algorithms for transcendental functions).

Conversions from floats to numeric are unchanged by this patch. These
may appear in index expressions and it is not yet clear whether any
change should be made, so that can be left for another day.

This patch assumes that the only supported floating point format is
now IEEE format, and the documentation is updated to reflect that.

Code by me, adapting the work of Ulf Adams and other contributors.

References:
https://dl.acm.org/citation.cfm?id=3192369

Reviewed-by: Tom Lane, Andres Freund, Donald Dong
Discussion: https://postgr.es/m/87r2el1bx6.fsf@news-spur.riddles.org.uk
2019-02-13 15:20:33 +00:00
Andrew Gierth f397e08599 Use strtof() and not strtod() for float4 input.
Using strtod() creates a double-rounding problem; the input decimal
value is first rounded to the nearest double; rounding that to the
nearest float may then give an incorrect result.

An example is that 7.038531e-26 when input via strtod and then rounded
to float4 gives 0xAE43FEp-107 instead of the correct 0xAE43FDp-107.

Values output by earlier PG versions with extra_float_digits=3 should
all be read in with the same values as previously. However, values
supplied by other software using shortest representations could be
mis-read.
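
For illustration, the double-rounding hazard in plain C, using the value from
the example above:

    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        const char *in = "7.038531e-26";
        float   via_double = (float) strtod(in, NULL);  /* rounds twice: to double, then float */
        float   direct = strtof(in, NULL);              /* rounds once, to the nearest float */

        /* on IEEE systems the two differ by one ulp, per the example above */
        printf("%d\n", via_double == direct);
        return 0;
    }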

On platforms that lack a strtof() entirely, we fall back to the old
incorrect rounding behavior. (As strtof() is required by C99, such
platforms are considered of primarily historical interest.) On VS2013,
some workarounds are used to get correct error handling.

The regression tests now test for the correct input values, so
platforms that lack strtof() will need resultmap entries. An entry for
HP-UX 10 is included (more may be needed).

Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/871s5emitx.fsf@news-spur.riddles.org.uk
Discussion: https://postgr.es/m/87d0owlqpv.fsf@news-spur.riddles.org.uk
2019-02-13 15:19:44 +00:00
Alvaro Herrera d357a16997 Blind attempt at fixing Windows build
Broken since fe33a196de.
2019-02-12 18:29:26 -03:00
Tom Lane 1fb57af920 Create the infrastructure for planner support functions.
Rename/repurpose pg_proc.protransform as "prosupport".  The idea is
still that it names an internal function that provides knowledge to
the planner about the behavior of the function it's attached to;
but redesign the API specification so that it's not limited to doing
just one thing, but can support an extensible set of requests.

The original purpose of simplifying a function call is handled by
the first request type to be invented, SupportRequestSimplify.
Adjust all the existing transform functions to handle this API,
and rename them from "xxx_transform" to "xxx_support" to reflect
the potential generalization of what they do.  (Since we never
previously provided any way for extensions to add transform functions,
this change doesn't create an API break for them.)
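
For illustration, a hedged sketch of the shape of a support function handling
the first request type ("my_func_support" is hypothetical):

    #include "postgres.h"
    #include "fmgr.h"
    #include "nodes/supportnodes.h"

    PG_FUNCTION_INFO_V1(my_func_support);

    Datum
    my_func_support(PG_FUNCTION_ARGS)
    {
        Node   *rawreq = (Node *) PG_GETARG_POINTER(0);
        Node   *ret = NULL;

        if (IsA(rawreq, SupportRequestSimplify))
        {
            SupportRequestSimplify *req = (SupportRequestSimplify *) rawreq;

            /* examine req->fcall here and, if possible, build a simplified Expr */
            (void) req;
        }

        /* returning NULL means "no simplification offered" for this request */
        PG_RETURN_POINTER(ret);
    }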

Also add DDL and pg_dump support for attaching a support function to a
user-defined function.  Unfortunately, DDL access has to be restricted
to superusers, at least for now; but seeing that support functions
will pretty much have to be written in C, that limitation is just
theoretical.  (This support is untested in this patch, but a follow-on
patch will add cases that exercise it.)

Discussion: https://postgr.es/m/15193.1548028093@sss.pgh.pa.us
2019-02-09 18:08:48 -05:00
Andrew Dunstan 51b025933d Fix perl searchpath for gen_keywordlist.pl
as found by running src/tools/perlcheck/pgperlsyncheck
2019-02-07 11:14:29 -05:00
Andrew Dunstan 592123efbb Unify searchpath and do file logic in MSVC build scripts.
Commit f83419b739 failed to notice that mkvcbuild.pl and build.pl use
different searchpath and do-file logic, breaking the latter, so it is
adjusted to use the same logic as mkvcbuild.pl.
2019-02-06 07:36:02 -05:00
Andrew Dunstan f83419b739 Fix included file path for modern perl
Contrary to the comment on 772d4b76, only paths starting with "./" or
"../" are considered relative to the current working directory by perl's
"do" function. So this patch converts all the relevant cases to use "./"
paths. This only affects MSVC.

Backpatch to all live branches.
2019-02-05 19:27:47 -05:00
Tom Lane 527b5ed1ad Doc: in each release branch, keep only that branch's own release notes.
Historically we've had each release branch include all prior branches'
notes, including minor-release changes, back to the beginning of the
project.  That's basically an O(N^2) proposition, and it was starting to
catch up with us: as of HEAD the back-branch release notes alone accounted
for nearly 30% of the documentation.  While there's certainly some value
in easy access to back-branch notes, this is getting out of hand.

Hence, switch over to the rule that each branch contains only its own
release notes.  So as to not make older notes too hard to find, each
branch will provide URLs for the immediately preceding branches'
release notes on the project website.

There might be value in providing aggregated notes across all branches
somewhere on the website, but that's a task for another day.

Discussion: https://postgr.es/m/cbd4aeb5-2d9c-8b84-e968-9e09393d4c83@postgresql.org
2019-02-04 19:18:49 -05:00
Andres Freund a9c35cf85c Change function call information to be variable length.
Before this change FunctionCallInfoData, the struct arguments etc for
V1 function calls are stored in, always had space for
FUNC_MAX_ARGS/100 arguments, storing datums and their nullness in two
arrays.  For nearly every function call 100 arguments is far more than
needed, therefore wasting memory. Arg and argnull being two separate
arrays also guarantees that to access a single argument, two
cachelines have to be touched.

Change the layout so there's a single variable-length array with pairs
of value / isnull. That drastically reduces memory consumption for
most function calls (on x86-64 a two argument function now uses
64 bytes, previously 936 bytes), and makes it very likely that argument
value and its nullness are on the same cacheline.

Arguments are stored in a new NullableDatum struct, which, due to
padding, needs more memory per argument than before. But as usually
far fewer arguments are stored, and individual arguments are cheaper
to access, that's still a clear win.  It's likely that there are other
places where conversion to NullableDatum arrays would make sense,
e.g. TupleTableSlots, but that's for another commit.

Because the function call information is now variable-length
allocations have to take the number of arguments into account. For
heap allocations that can be done with SizeForFunctionCallInfoData(),
for on-stack allocations there's a new LOCAL_FCINFO(name, nargs) macro
that helps to allocate an appropriately sized and aligned variable.
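
For illustration, a minimal sketch of calling a two-argument function with the
new layout (flinfo setup elided; the helper is hypothetical):

    #include "postgres.h"
    #include "fmgr.h"

    static Datum
    call_two_arg(FmgrInfo *flinfo, Datum arg0, Datum arg1, Oid collation)
    {
        /* stack space for a FunctionCallInfo sized for exactly two arguments */
        LOCAL_FCINFO(fcinfo, 2);
        Datum   result;

        InitFunctionCallInfoData(*fcinfo, flinfo, 2, collation, NULL, NULL);

        /* value and isnull now sit side by side in one NullableDatum array */
        fcinfo->args[0].value = arg0;
        fcinfo->args[0].isnull = false;
        fcinfo->args[1].value = arg1;
        fcinfo->args[1].isnull = false;

        result = FunctionCallInvoke(fcinfo);
        if (fcinfo->isnull)
            elog(ERROR, "function returned NULL");
        return result;
    }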

Some places with stack allocation function call information don't know
the number of arguments at compile time, and currently variably sized
stack allocations aren't allowed in postgres. Therefore allow for
FUNC_MAX_ARGS space in these cases. They're not that common, so for
now that seems acceptable.

Because of the need to allocate FunctionCallInfo of the appropriate
size, older extensions may need to update their code. To avoid subtle
breakages, the FunctionCallInfoData struct has been renamed to
FunctionCallInfoBaseData. Most code only references FunctionCallInfo,
so that shouldn't cause much collateral damage.

This change is also a prerequisite for more efficient expression JIT
compilation (by allocating the function call information on the stack,
allowing LLVM to optimize it away); previously the size of the call
information caused problems inside LLVM's optimizer.

Author: Andres Freund
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/20180605172952.x34m5uz6ju6enaem@alap3.anarazel.de
2019-01-26 14:17:52 -08:00
Andres Freund 63746189b2 Change snapshot type to be determined by enum rather than callback.
This is in preparation for allowing the same snapshot be used for
different table AMs. With the current callback based approach we would
need one callback for each supported AM, which clearly would not be
extensible.  Thus add a new Snapshot->snapshot_type field, and move
the dispatch into HeapTupleSatisfiesVisibility() (which is now a
function). Later work will then dispatch calls to
HeapTupleSatisfiesVisibility() and other AMs visibility functions
depending on the type of the table.  The central SnapshotType enum
also seems like a good location to centralize documentation about the
intended behaviour of various types of snapshots.
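
For illustration, a short sketch of dispatching on the new field (the helper
is hypothetical; the real dispatch lives in HeapTupleSatisfiesVisibility()):

    #include "postgres.h"
    #include "utils/snapshot.h"

    /* does this snapshot follow regular MVCC visibility rules? */
    static bool
    snapshot_is_mvcc(Snapshot snapshot)
    {
        switch (snapshot->snapshot_type)
        {
            case SNAPSHOT_MVCC:
            case SNAPSHOT_HISTORIC_MVCC:
                return true;
            case SNAPSHOT_SELF:
            case SNAPSHOT_ANY:
            case SNAPSHOT_TOAST:
            case SNAPSHOT_DIRTY:
            case SNAPSHOT_NON_VACUUMABLE:
                return false;
        }
        return false;               /* keep compiler quiet */
    }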

As tqual.h isn't included by bufmgr.h any more (as HeapTupleSatisfies*
isn't referenced by TestForOldSnapshot() anymore) a few files now need
to include it directly.

Author: Andres Freund, loosely based on earlier work by Haribabu Kommi
Discussion:
    https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
    https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
2019-01-21 17:03:15 -08:00
Tomas Vondra d6ef7fe75c Revert "Add valgrind suppressions for wcsrtombs optimizations"
This reverts commit d3bbc4b96a.

Per discussion, it's not desirable to add valgrind suppressions for
outside our own code base (e.g. glibc in this case), especially when
the suppressions may be platform-specific. There are better ways to
deal with that, e.g. by providing local suppressions.

Discussion: https://www.postgresql.org/message-id/flat/90ac0452-e907-e7a4-b3c8-15bd33780e62%402ndquadrant.com
2019-01-19 20:49:51 +01:00
Magnus Hagander 0301db623d Replace @postgresql.org with @lists.postgresql.org for mailinglists
Commit c0d0e54084 replaced the ones in the documentation, but missed out
on the ones in the code. Replace those as well, but unlike c0d0e54084,
don't backpatch the code changes to avoid breaking translations.
2019-01-19 19:06:35 +01:00
Noah Misch 472e1e4cf6 Make Emacs perl-mode indent more like perltidy.
This especially helps braces that surround code blocks.  Back-patch to
v11, where commit 56fb890ace first
appeared; before that, settings were even more distant from perltidy.

Reviewed by Andrew Dunstan.

Discussion: https://postgr.es/m/20190103055355.GB267595@gust.leadboat.com
2019-01-13 11:32:31 -08:00
Tom Lane 52a301e0d9 Improve missing-program error handling in make_ctags and make_etags.
If ctags (resp. etags) isn't installed, these scripts naturally fail,
but the error messages were less clear than one could wish.
It seems worth installing an explicit test to improve that.

Nikolay Shaplov, with suggestions from Michael Paquier and Andrew Dunstan

Discussion: https://postgr.es/m/2394207.ccz7JgCJsh@x200m
2019-01-13 13:33:50 -05:00
Tom Lane c64d0cd5ce Use perfect hashing, instead of binary search, for keyword lookup.
We've been speculating for a long time that hash-based keyword lookup
ought to be faster than binary search, but up to now we hadn't found
a suitable tool for generating the hash function.  Joerg Sonnenberger
provided the inspiration, and sample code, to show us that rolling our
own generator wasn't a ridiculous idea.  Hence, do that.

The method used here requires a lookup table of approximately 4 bytes
per keyword, but that's less than what we saved in the predecessor commit
afb0d0712, so it's not a big problem.  The time savings is indeed
significant: preliminary testing suggests that the total time for raw
parsing (flex + bison phases) drops by ~20%.

Patch by me, but it owes its existence to Joerg Sonnenberger;
thanks also to John Naylor for review.

Discussion: https://postgr.es/m/20190103163340.GA15803@britannica.bec.de
2019-01-09 19:47:46 -05:00
Tom Lane afb0d0712f Replace the data structure used for keyword lookup.
Previously, ScanKeywordLookup was passed an array of string pointers.
This had some performance deficiencies: the strings themselves might
be scattered all over the place depending on the compiler (and some
quick checking shows that at least with gcc-on-Linux, they indeed
weren't reliably close together).  That led to very cache-unfriendly
behavior as the binary search touched strings in many different pages.
Also, depending on the platform, the string pointers might need to
be adjusted at program start, so that they couldn't be simple constant
data.  And the ScanKeyword struct had been designed with an eye to
32-bit machines originally; on 64-bit it requires 16 bytes per
keyword, making it even more cache-unfriendly.

Redesign so that the keyword strings themselves are allocated
consecutively (as part of one big char-string constant), thereby
eliminating the touch-lots-of-unrelated-pages syndrome.  And get
rid of the ScanKeyword array in favor of three separate arrays:
uint16 offsets into the keyword array, uint16 token codes, and
uint8 keyword categories.  That reduces the overhead per keyword
to 5 bytes instead of 16 (even less in programs that only need
one of the token codes and categories); moreover, the binary search
only touches the offsets array, further reducing its cache footprint.
This also lets us put the token codes somewhere else than the
keyword strings are, which avoids some unpleasant build dependencies.

While we're at it, wrap the data used by ScanKeywordLookup into
a struct that can be treated as an opaque type by most callers.
That doesn't change things much right now, but it will make it
less painful to switch to a hash-based lookup method, as is being
discussed in the mailing list thread.
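
For illustration, a hedged sketch of a caller under the new arrangement (the
header location and the per-program token array are assumptions about how
callers pair token codes with the shared keyword list):

    #include "postgres.h"
    #include "common/keywords.h"

    /* assumed: a generated uint16 array parallel to ScanKeywords */
    extern const uint16 MyScanKeywordTokens[];

    static int
    keyword_token(const char *word)
    {
        int     kwnum = ScanKeywordLookup(word, &ScanKeywords);

        if (kwnum < 0)
            return -1;                        /* not a keyword */
        return MyScanKeywordTokens[kwnum];    /* caller-owned token code */
    }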

Most of the change here is associated with adding a generator
script that can build the new data structure from the same
list-of-PG_KEYWORD header representation we used before.
The PG_KEYWORD lists that plpgsql and ecpg used to embed in
their scanner .c files have to be moved into headers, and the
Makefiles have to be taught to invoke the generator script.
This work is also necessary if we're to consider hash-based lookup,
since the generator script is what would be responsible for
constructing a hash table.

Aside from saving a few kilobytes in each program that includes
the keyword table, this seems to speed up raw parsing (flex+bison)
by a few percent.  So it's worth doing even as it stands, though
we think we can gain even more with a follow-on patch to switch
to hash-based lookup.

John Naylor, with further hacking by me

Discussion: https://postgr.es/m/CAJVSVGXdFVU2sgym89XPL=Lv1zOS5=EHHQ8XWNzFL=mTXkKMLw@mail.gmail.com
2019-01-06 17:02:57 -05:00
Tom Lane d33faa285b Move the built-in conversions into the initial catalog data.
Instead of running a SQL script to create the standard conversion
functions and pg_conversion entries, put those entries into the
initial data in postgres.bki.

This shaves a few percent off the runtime of initdb, and also allows
accurate comments to be attached to the conversion functions; the
previous script labeled them with machine-generated comments that
were not quite right for multi-purpose conversion functions.
Also, we can get rid of the duplicative Makefile and MSVC perl
implementations of the generation code for that SQL script.

A functional change is that these pg_proc and pg_conversion entries
are now "pinned" by initdb.  Leaving them unpinned was perhaps a
good thing back while the conversions feature was under development,
but there seems no valid reason for it now.

Also, the conversion functions are now marked as immutable, where
before they were volatile by virtue of lacking any explicit
specification.  That seems like it was just an oversight.

To avoid using magic constants in pg_conversion.dat, extend
genbki.pl to allow encoding names to be converted, much as it
does for language, access method, etc names.

John Naylor

Discussion: https://postgr.es/m/CAJVSVGWtUqxpfAaxS88vEGvi+jKzWZb2EStu5io-UPc4p9rSJg@mail.gmail.com
2019-01-03 19:47:53 -05:00
Tom Lane 814c9019aa Use symbolic references for pg_language OIDs in the bootstrap data.
This patch teaches genbki.pl to replace pg_language names by OIDs
in much the same way as it already does for pg_am names etc, and
converts pg_proc.dat to use such symbolic references in the prolang
column.

Aside from getting rid of a few more magic numbers in the initial
catalog data, this means that Gen_fmgrtab.pl no longer needs to read
pg_language.dat, since it doesn't have to know the OID of the "internal"
language; now it's just looking for the string "internal".

No need for a catversion bump, since the contents of postgres.bki
don't actually change at all.

John Naylor

Discussion: https://postgr.es/m/CAJVSVGWtUqxpfAaxS88vEGvi+jKzWZb2EStu5io-UPc4p9rSJg@mail.gmail.com
2019-01-03 18:38:49 -05:00
Bruce Momjian 97c39498e5 Update copyright for 2019
Backpatch-through: certain files through 9.4
2019-01-02 12:44:25 -05:00
Andres Freund 09568ec3d3 Create a separate oid range for oids assigned by genbki.pl.
The changes I made in 578b229718 assigned oids below
FirstBootstrapObjectId to objects in include/catalog/*.dat files that
did not have an oid assigned, starting at the max oid explicitly
assigned.  Tom criticized that for mainly two reasons:
1) It's not clear which values are manually and which automatically
   assigned.
2) The space below FirstBootstrapObjectId gets pretty crowded, and
   some PostgreSQL forks have used oids >= 9000 for their own objects,
   to avoid conflicting.

Thus create a new range for objects not assigned explicit oids, but
assigned by genbki.pl. For now 1-9999 is for explicitly assigned oids,
FirstGenbkiObjectId (10000) to FirstBootstrapObjectId (12000) - 1 is for
genbki.pl assigned oids, and < FirstNormalObjectId (16384) is for oids
assigned during bootstrap.  It's possible that we'll have to adjust
these boundaries, but there's some headroom for now.

Add a note suggesting that oids in forks should be assigned in the
9000-9999 range.

Catversion bump for obvious reasons.

Per complaint from Tom Lane.

Author: Andres Freund
Discussion: https://postgr.es/m/16845.1544393682@sss.pgh.pa.us
2018-12-13 14:50:57 -08:00
Tom Lane 29180e5d78 Eliminate parallel-make hazard in ecpg/preproc.
Re-making ecpglib's typename.o is dangerous because another make thread
could be doing that at the same time.  While we've not heard field
complaints traceable to this, it seems inevitable that it'd bite someone
eventually.  Instead, symlink typename.c into the preproc directory and
recompile it there.  That file is small enough that compiling it twice
isn't much of a penalty.  Furthermore, this way we get a .o file that's
made without shlib CFLAGS, which seems cleaner.

This requires adding more stuff to the module's -I list.  The MSVC
aspect of that is untested, but I'm sure the buildfarm will tell me
if I got it wrong.

Per a suggestion from Peter Eisentraut.  Although this is theoretically
a bug fix, the lack of field reports makes me feel we needn't back-patch.

Discussion: https://postgr.es/m/31364.1543511708@sss.pgh.pa.us
2018-12-01 17:19:51 -05:00
Michael Paquier 431f1599a2 Add support for NO_INSTALLCHECK in MSVC scripts
When fetching a list of tests for a given extension in contrib/ or
src/test/modules/, NO_INSTALLCHECK now gets checked first.  If present,
an empty list of tests is returned to let the caller know that tests
for this module need to be bypassed.

This actually fixes a set of issues with MSVC for modules using
REGRESS_OPTS, as incorrect parsing caused the launched command
to eat the first test listed.  The practical effect on the tree is that
several modules listed only a single test, so their regressions have
been running no actual tests.  pg_stat_statements, test_rls_hooks and
commit_ts were impacted by that.  Some other modules like test_decoding
(or snapshot_too_old) don't yet use PGXS rules, but their makefiles will
soon be refactored by an upcoming patch.

Author: Michael Paquier
Reviewed-by: Andrew Dunstan
Discussion: https://postgr.es/m/20181126054302.GI1776@paquier.xyz
2018-11-29 10:31:12 +09:00
Andres Freund 578b229718 Remove WITH OIDS support, change oid catalog column visibility.
Previously tables declared WITH OIDS, including a significant fraction
of the catalog tables, stored the oid column not as a normal column,
but as part of the tuple header.

This special column was not shown by default, which was somewhat odd,
as it's often (consider e.g. pg_class.oid) one of the more important
parts of a row.  Neither pg_dump nor COPY included the contents of the
oid column by default.

The fact that the oid column was not an ordinary column necessitated a
significant amount of special case code to support oid columns. That
was already painful for the existing code, and the upcoming work aiming
to make table storage pluggable would have required expanding and
duplicating that "specialness" significantly.

WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0).
Remove it.

Removing includes:
- CREATE TABLE and ALTER TABLE syntax for declaring the table to be
  WITH OIDS has been removed (WITH (oids[ = true]) will error out)
- pg_dump does not support dumping tables declared WITH OIDS and will
  issue a warning when dumping one (and ignore the oid column).
- restoring a pg_dump archive with pg_restore will warn when
  restoring a table with oid contents (and ignore the oid column)
- COPY will refuse to load a binary dump that includes oids.
- pg_upgrade will error out when encountering tables declared WITH
  OIDS; they have to be altered to remove the oid column first.
- Functionality to access the oid of the last inserted row (like
  plpgsql's RESULT_OID, spi's SPI_lastoid, ...) has been removed.

The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false)
for CREATE TABLE) is still supported. While that requires a bit of
support code, it seems unnecessary to break applications / dumps that
do not use oids, and are explicit about not using them.

The biggest user of WITH OID columns was postgres' catalog. This
commit changes all 'magic' oid columns to be columns that are normally
declared and stored. To reduce unnecessary query breakage, all the
newly added columns are still named 'oid', even if a table's column
naming scheme would indicate 'reloid' or such.  This obviously
requires adapting a lot of code, mostly replacing oid access via
HeapTupleGetOid() with access to the underlying Form_pg_*->oid column.
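
As a toy, self-contained sketch of that access-pattern change
(FormData_pg_thing and its fields are invented; only the
HeapTupleGetOid() and Form_pg_*->oid idioms come from the message
above):

    #include <stdio.h>

    typedef unsigned int Oid;

    /*
     * Simplified stand-in for a catalog row struct: after this change the
     * oid is just an ordinary, explicitly declared first column instead of
     * living in the tuple header.
     */
    typedef struct FormData_pg_thing
    {
        Oid     oid;            /* previously hidden in the tuple header */
        int     thingattr;      /* other ordinary columns follow */
    } FormData_pg_thing;

    typedef FormData_pg_thing *Form_pg_thing;

    int
    main(void)
    {
        FormData_pg_thing row = {.oid = 1259, .thingattr = 42};
        Form_pg_thing thing = &row;

        /* Old style (removed):  Oid oid = HeapTupleGetOid(tuple);  */
        /* New style: read the oid like any other column.           */
        printf("oid = %u\n", thing->oid);
        return 0;
    }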

The bootstrap process now assigns oids for all oid columns in
genbki.pl that do not have an explicit value (starting at the largest
oid previously used); only oids assigned later, during bootstrap, will
be above FirstBootstrapObjectId.  As the oid column is now a normal
column, the special bootstrap syntax for oids has been removed.

Oids are not automatically assigned during insertion anymore; all
backend code explicitly assigns oids with GetNewOidWithIndex(). For
the rare case that insertions into the catalog via SQL are called for,
the new pg_nextoid() function can be used (which only works on catalog
tables).

The fact that oid columns on system tables are now normal columns
means that they will be included in the set of columns expanded
by * (i.e. SELECT * FROM pg_class will now include the table's oid,
previously it did not). It'd not technically be hard to hide the oid
column by default, but that'd mean confusing behavior would either
have to be carried forward forever, or it'd cause breakage down the
line.

While it's not unlikely that further adjustments are needed, the
scope/invasiveness of the patch makes it worthwhile to get this merged
now. It's painful to maintain externally, too complicated to commit
after the code freeze, and a dependency of a number of other
patches.

Catversion bump, for obvious reasons.

Author: Andres Freund, with contributions by John Naylor
Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de
2018-11-20 16:00:17 -08:00
Tomas Vondra d3bbc4b96a Add valgrind suppressions for wcsrtombs optimizations
wcsrtombs (called through wchar2char from common functions like lower,
upper, etc.) uses various optimizations that may look like access to
uninitialized data, triggering valgrind reports.

For example, AVX2 instructions load data in 256-bit chunks, and gconv
does something similar with 32-bit chunks.  This is faster than accessing
the bytes one by one, and the uninitialized part of the buffer is not
actually used. So suppress the bogus reports.

The exact stack depends on possible optimizations - it might be AVX, SSE
(as in the report by Aleksander Alekseev) or something else. Hence the
last frame is wildcarded, to deal with this.

Backpatch all the way back to 9.4.

Author: Tomas Vondra
Discussion: https://www.postgresql.org/message-id/flat/90ac0452-e907-e7a4-b3c8-15bd33780e62%402ndquadrant.com
Discussion: https://www.postgresql.org/message-id/20180220150838.GD18315@e733.localdomain
2018-11-17 23:50:21 +01:00
Andrew Dunstan 12d5f39b15 Adjust valgrind fix in commit 517b0d0b5f
lousyjack still wasn't happy. I have tested this modification and it
worked.
2018-11-08 08:38:46 -05:00
Andrew Dunstan 54ad7282fe Use parallel installcheck in vcregress.pl's upgrade test
This is to keep the test in sync with what's done in test.sh, which
acquired this change in commit da906766c.
2018-11-07 14:17:57 -05:00
Andrew Dunstan 517b0d0b5f Quiet valgrind complaints following pread/pwrite changes
Per complaints from buildfarm and elsewhere
Patch from Jasper Pedersen
Discussion: https://postgr.es/m/0f419c91-49ab-2399-0143-13063bd97c46@redhat.com
2018-11-07 12:58:39 -05:00
Thomas Munro 3fd2a7932e Provide pg_pread() and pg_pwrite() for random I/O.
Forward to POSIX pread() and pwrite(), or emulate them if unavailable.
The emulation is not perfect as the file position is changed, so
we'll put pg_ prefixes on the names to minimize the risk of confusion
in future patches that might inadvertently try to mix pread() and read()
on the same file descriptor.
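
For illustration, a rough standalone sketch of what such an emulation
can look like (this is not the src/port code; my_pread and the test
file are made up), showing why the file position moves:

    #include <sys/types.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /*
     * Hypothetical fallback resembling the described emulation: seek, then
     * read.  Unlike a native pread(), this moves the file position, which
     * is why callers should not mix it with plain read() on the same
     * descriptor.
     */
    static ssize_t
    my_pread(int fd, void *buf, size_t nbytes, off_t offset)
    {
        if (lseek(fd, offset, SEEK_SET) < 0)
            return -1;
        return read(fd, buf, nbytes);
    }

    int
    main(void)
    {
        char    buf[16] = {0};
        int     fd = open("/etc/hostname", O_RDONLY);

        if (fd < 0)
            return 1;
        if (my_pread(fd, buf, sizeof(buf) - 1, 0) > 0)
            printf("read: %s\n", buf);
        close(fd);
        return 0;
    }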

Author: Thomas Munro
Reviewed-by: Tom Lane, Jesper Pedersen
Discussion: https://postgr.es/m/CAEepm=02rapCpPR3ZGF2vW=SBHSdFYO_bz_f-wwWJonmA3APgw@mail.gmail.com
2018-11-07 09:50:01 +13:00
Magnus Hagander fbec7459aa Fix spelling errors and typos in comments
Author: Daniel Gustafsson <daniel@yesql.se>
2018-11-02 13:56:52 +01:00
Peter Eisentraut a9e5f8e781 Exclude temporary directories from pgindent
Exclude tmp_check and tmp_install from pgindent.  In a fully-built
tree, pgindent would spend a lot of time digging through these
directories and end up re-indenting installed header files.
2018-10-29 11:39:44 +01:00
Andrew Dunstan 1df92eeafe Fix perl searchpath for modern perl for MSVC tools
Modern versions of perl no longer include the current directory in the
perl searchpath, as it's insecure. Instead of adding the current
directory, we get around the problem by adding the directory where the
script lives.

Problem noted by Victor Wagner.

Solution adapted from buildfarm client code.

Backpatch to all live versions.
2018-10-28 12:22:32 -04:00
Andres Freund 2d10defa77 Remove timetravel extension.
The extension depended on old types which are about to be removed. As
the code additionally was pretty crufty and didn't provide much in the
way of functionality, removing the extension seems to be the best way
forward.  It's fairly trivial to write functionality in plpgsql that
more than covers what timetravel did.

Author: Andres Freund
Discussion:
    https://postgr.es/m/20171213080506.cwjkpcz3bkk6yz2u@alap3.anarazel.de
    https://postgr.es/m/25615.1513115237@sss.pgh.pa.us
2018-10-11 11:43:56 -07:00
Tom Lane 61f14cc8c8 Tweak MSVC build system to match changes in 7143b3e82.
Looks like we need to pull in $libpgcommon in a couple more
places than before.

Per buildfarm.
2018-09-28 15:49:05 -04:00
Tom Lane 97c6852ff7 Tweak MSVC build system to match changes in 7143b3e82.
Also try to make the comment suggesting that this might be needed
more intelligible.

Per buildfarm.
2018-09-28 15:17:07 -04:00
Tom Lane 26e9d4d4ef Convert elog.c's useful_strerror() into a globally-used strerror wrapper.
elog.c has long had a private strerror wrapper that handles assorted
possible failures or deficiencies of the platform's strerror.  On Windows,
it also knows how to translate Winsock error codes, which the native
strerror does not.  Move all this code into src/port/strerror.c and
define strerror() as a macro that invokes it, so that both our frontend
and backend code will have all of this behavior.
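
For illustration, a rough standalone sketch of the wrapper-plus-macro
approach (my_strerror here is made up; the real wrapper also knows
about Winsock codes and other cases not shown):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /*
     * Illustrative wrapper: fall back to a generic message when the
     * platform strerror() has nothing useful to say.  The real wrapper
     * handles more cases (e.g. Winsock error codes on Windows).
     */
    static char *
    my_strerror(int errnum)
    {
        static char buf[64];
        char       *msg = strerror(errnum);

        if (msg == NULL || *msg == '\0')
        {
            snprintf(buf, sizeof(buf), "unrecognized error %d", errnum);
            return buf;
        }
        return msg;
    }

    /* Route later strerror() calls in this file through the wrapper. */
    #define strerror(e) my_strerror(e)

    int
    main(void)
    {
        printf("%s\n", strerror(ENOENT));
        printf("%s\n", strerror(999999));
        return 0;
    }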

I believe this constitutes an actual bug fix on Windows, since AFAICS
our frontend code did not report Winsock error codes properly before this.
However, the main point is to lay the groundwork for implementing %m
in src/port/snprintf.c: the behavior we want %m to have is this one,
not the native strerror's.

Note that this throws away the prior use of src/port/strerror.c,
which was to implement strerror() on platforms lacking it.  That's
been dead code for nigh twenty years now, since strerror() was
already required by C89.

We should likewise cause strerror_r to use this behavior, but
I'll tackle that separately.

Patch by me, reviewed by Michael Paquier

Discussion: https://postgr.es/m/2975.1526862605@sss.pgh.pa.us
2018-09-26 11:06:42 -04:00
Tom Lane b09a64d602 Add missing pg_description strings for pg_type entries.
I noticed that all non-composite, non-array entries in pg_type.dat
had descr strings, except for "json" and the pseudo-types.  The
lack for json seems certainly an oversight, and there's surely
little reason to not have entries for the pseudo-types either.
So add some.

"make reformat-dat-files" turned up some formatting issues in
pg_amop.dat, too, so fix those in passing.

No catversion bump since the backend doesn't care too much what is
in pg_description.
2018-09-20 16:06:18 -04:00
Andrew Gierth 60f6756f92 Fix out-of-tree build for transform modules.
Neither plperl nor plpython installed sufficient header files to
permit transform modules to be built out-of-tree using PGXS. Fix that
by installing all plperl and plpython header files (other than those
with special purposes such as generated data tables), and also install
plpython's special .mk file for mangling regression tests.

(This commit does not fix the windows install, which does not
currently install _any_ plperl or plpython headers.)

Also fix the existing transform modules for hstore and ltree so that
their cross-module #include directives work as anticipated by commit
df163230b9 et seq. This allows them to serve as working examples of
how to reference other modules when doing separate out-of-tree builds.

Discussion: https://postgr.es/m/87o9ej8bgl.fsf%40news-spur.riddles.org.uk
2018-09-16 18:46:45 +01:00
Tom Lane ed0cdf0e05 Install a check for mis-linking of src/port and src/common functions.
On ELF-based platforms (and maybe others?) it's possible for a shared
library, when dynamically loaded into the backend, to call the backend
versions of src/port and src/common functions rather than the frontend
versions that are actually linked into the shlib.  This is definitely
not what we want, because the frontend versions often behave slightly
differently.  Up to now it's been "slight" enough that nobody noticed;
but with the addition of SCRAM support functions in src/common, we're
observing crashes due to the difference between palloc and malloc
memory allocation rules, as reported in bug #15367 from Jeremy Evans.

The purpose of this patch is to create a direct test for this type of
mis-linking, so that we know whether any given platform requires extra
measures to prevent using the wrong functions.  If the test fails, it
will lead to connection failures in the contrib/postgres_fdw regression
test.  At the moment, *BSD platforms using ELF format are known to have
the problem and can be expected to fail; but we need to know whether
anything else does, and we need a reliable ongoing check for future
platforms.
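
One way to make that kind of mis-linking observable, sketched here
purely as an illustration of the "canary" idea rather than the
committed test: compile a tiny function differently for frontend and
backend builds, and check which copy a loaded module actually bound to.

    #include <stdbool.h>
    #include <stdio.h>

    /*
     * Hypothetical canary: the frontend build defines FRONTEND, the
     * backend build does not, so the frontend and backend copies of this
     * function disagree.  A shared library that expects the frontend
     * answer but gets the backend one has resolved the symbol against the
     * wrong object.
     */
    bool
    link_canary_is_frontend(void)
    {
    #ifdef FRONTEND
        return true;
    #else
        return false;
    #endif
    }

    int
    main(void)
    {
        if (link_canary_is_frontend())
            printf("frontend copy linked, as a frontend program expects\n");
        else
            fprintf(stderr, "backend copy linked: mis-linking detected\n");
        return 0;
    }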

Actually fixing the problem will be the subject of later commit(s).

Discussion: https://postgr.es/m/153626613985.23143.4743626885618266803@wrigleys.postgresql.org
2018-09-09 12:23:23 -04:00
Peter Eisentraut 842cb9fa62 Refactor dlopen() support
Nowadays, all platforms except Windows and older HP-UX have standard
dlopen() support.  So having a separate implementation per platform
under src/backend/port/dynloader/ is a bit excessive.  Instead, treat
dlopen() like other library functions that happen to be missing
sometimes and put a replacement implementation under src/port/.
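
For reference, a minimal standalone use of the standard interface that
most platforms now provide (the library soname and symbol name below
are just placeholders for the example):

    #include <dlfcn.h>
    #include <stdio.h>

    int
    main(void)
    {
        /* Open the C math library; the exact soname is platform-dependent. */
        void   *handle = dlopen("libm.so.6", RTLD_NOW | RTLD_GLOBAL);
        double  (*my_cos)(double);

        if (handle == NULL)
        {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }

        /* Look up a symbol by name, as a loadable module's entry point would be. */
        my_cos = (double (*)(double)) dlsym(handle, "cos");
        if (my_cos != NULL)
            printf("cos(0) = %f\n", my_cos(0.0));

        dlclose(handle);
        return 0;
    }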

Discussion: https://www.postgresql.org/message-id/flat/e11a49cb-570a-60b7-707d-7084c8de0e61%402ndquadrant.com#54e735ae37476a121abb4e33c2549b03
2018-09-06 11:33:04 +02:00
Andres Freund d9dd406fe2 Require C99 (and thus MSCV 2013 upwards).
In 86d78ef50e I enabled configure to check for C99 support, with the
goal of checking which platforms support C99.  While there are a few
machines without C99 support among our buildfarm animals,
de-supporting them for v12 was deemed acceptable.

While not tested in the aforementioned commit, the biggest increase in
minimum compiler version comes from MSVC, which gained C99 support
fairly late. The subset in MSVC 2013 is sufficient for our needs, at
this point. While that is a significant increase in minimum version,
the existing windows binaries are already built with a new enough
version.

Make configure error out if C99 support could not be detected. For
MSVC builds, increase the minimum version to 2013.
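
For context, a toy standalone example of the sort of C99-only
constructs that a strict C89 compiler would reject (which exact subset
the tree actually relies on is not spelled out here):

    #include <stdbool.h>
    #include <stdio.h>

    struct range
    {
        int     low;
        int     high;
    };

    int
    main(void)
    {
        printf("demonstrating a few C99 constructs\n");

        /* Declarations after statements and designated initializers are C99. */
        struct range r = {.low = 1, .high = 10};
        bool        ok = true;          /* <stdbool.h> is C99 as well */
        int         sum = 0;

        for (int i = r.low; i <= r.high; i++)   /* loop-scoped declaration */
            sum += i;

        printf("sum(%d..%d) = %d, ok = %d\n", r.low, r.high, sum, ok);
        return 0;
    }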

The increase to MSVC 2013 allows us to get rid of VCBuildProject.pm,
as that was only required for MSVC 2005/2008.

Author: Andres Freund
Discussion: https://postgr.es/m/97d4b165-192d-3605-749c-f614a0c4e783@2ndquadrant.com
2018-08-23 18:33:57 -07:00
Noah Misch f3efef434f MSVC: Finish clean.bat tmp_check coverage.
Use wildcards, so one can add a TAP test suite without updating this
file.  Back-patch to v11, which omitted multiple new suites.
2018-08-19 01:12:22 -07:00
Noah Misch a53f0edd64 MSVC: Remove any tmp_check directory before running a TAP test suite.
Back-patch to v11, where commit 90627cf98a
made the GNU make build system do likewise.  Without this, when a
typical PostgresNode-using test failed, subsequent runs bailed out with
a "File exists" error.
2018-08-19 01:12:22 -07:00
Andrew Gierth df163230b9 Provide for contrib and pgxs modules to install include files.
This allows out-of-tree PLs and similar code to get access to
definitions needed to work with extension data types.

The following existing modules now install headers: contrib/cube,
contrib/hstore, contrib/isn, contrib/ltree, contrib/seg.

Discussion: https://postgr.es/m/87y3euomjh.fsf%40news-spur.riddles.org.uk
2018-07-31 20:07:39 +01:00
Bruce Momjian e2c0df7828 pgtest: run clean, build, and check stages separately
This allows for cleaner error reporting.

Backpatch-through: 9.5
2018-07-28 15:34:06 -04:00
Bruce Momjian 96313bff29 pgtest: grab possible warnings from install.log
Since PG 9.5, 'make check' records the build output in install.log, so
look in there for warnings too.

Backpatch-through: 9.5
2018-07-28 11:35:53 -04:00
Michael Paquier 9ebe0572ce Refactor cluster_rel() to handle more options
This extends cluster_rel() in such a way that more options can be added
in the future, which will reduce the amount of code churn for an
upcoming SKIP_LOCKED option aimed at VACUUM.  As VACUUM FULL is a different
flavor of CLUSTER, we want to make that extensible to ease integration.

This only reworks the API and its callers, without providing anything
user-facing.  Two options are present now: verbose mode and relation
recheck when doing the cluster command work across multiple
transactions.  This could be used as well as a base to extend the
grammar of CLUSTER later on.
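
For illustration, a generic standalone sketch of the flags-bitmask
shape such an extensible options argument usually takes (the names
below are placeholders, not necessarily the committed ones):

    #include <stdio.h>

    /*
     * Placeholder option bits; an extensible int avoids adding a new bool
     * parameter for every future option.
     */
    #define OPT_VERBOSE   (1 << 0)   /* print progress */
    #define OPT_RECHECK   (1 << 1)   /* re-verify the relation across transactions */

    static void
    do_cluster_like_work(const char *relname, int options)
    {
        if (options & OPT_RECHECK)
            printf("rechecking %s before processing\n", relname);
        if (options & OPT_VERBOSE)
            printf("clustering %s (verbose)\n", relname);
        /* ... actual work would go here ... */
    }

    int
    main(void)
    {
        /* Adding a future option only means defining one more bit. */
        do_cluster_like_work("some_table", OPT_VERBOSE | OPT_RECHECK);
        return 0;
    }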

Author: Michael Paquier
Reviewed-by: Nathan Bossart
Discussion: https://postgr.es/m/20180723031058.GE2854@paquier.xyz
2018-07-24 11:37:32 +09:00
Robert Haas 32df1c9afa Add subtransaction handling for table synchronization workers.
Since the old logic was completely unaware of subtransactions, a
change made in a subsequently-aborted subtransaction would still cause
workers to be stopped at toplevel transaction commit.  Fix that by
managing a stack of worker lists rather than just one.
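
As a toy standalone sketch of the general idea of keeping one worker
list per subtransaction nesting level, merged into the parent on commit
and simply dropped on abort (all names here are invented, not the
committed code):

    #include <stdio.h>

    #define MAX_NEST    8
    #define MAX_WORKERS 4

    /* One list of "workers to stop at commit" per nesting level. */
    static int  nworkers[MAX_NEST];
    static int  workers[MAX_NEST][MAX_WORKERS];
    static int  level = 0;          /* 0 = toplevel transaction */

    static void subxact_start(void)     { level++; nworkers[level] = 0; }
    static void remember_worker(int w)  { workers[level][nworkers[level]++] = w; }

    /* On abort, simply forget this level's list. */
    static void subxact_abort(void)     { level--; }

    /* On commit, merge this level's list into the parent's. */
    static void
    subxact_commit(void)
    {
        for (int i = 0; i < nworkers[level]; i++)
            workers[level - 1][nworkers[level - 1]++] = workers[level][i];
        level--;
    }

    int
    main(void)
    {
        subxact_start();            /* subtransaction that will abort */
        remember_worker(101);
        subxact_abort();            /* worker 101 is forgotten, not stopped */

        subxact_start();            /* subtransaction that commits */
        remember_worker(102);
        subxact_commit();           /* worker 102 survives to the toplevel */

        printf("workers to stop at top-level commit: %d\n", nworkers[0]);
        return 0;
    }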

Amit Khandekar and Robert Haas

Discussion: http://postgr.es/m/CAJ3gD9eaG_mWqiOTA2LfAug-VRNn1hrhf50Xi1YroxL37QkZNg@mail.gmail.com
2018-07-16 17:33:22 -04:00
Peter Eisentraut 333224c99e Update documentation editor setup instructions
Now that the documentation sources are in XML rather than SGML, some of
the documentation about the editor, or more specifically Emacs, setup
needs updating.  The updated instructions recommend using nxml-mode,
which works mostly out of the box, with some small tweaks in
emacs.samples and .dir-locals.el.

Also remove some obsolete stuff in .dir-locals.el.  I did, however,
leave the sgml-mode settings in there so that someone using Emacs
without emacs.samples gets those settings when editing a *.sgml file.
2018-07-13 21:23:41 +02:00
Peter Eisentraut 1f4ec89459 Remove obsolete documentation build tools for Windows
The scripts and instructions have been nonfunctional at least since
PostgreSQL 10 (commit 510074f9f0) and
nobody has stepped up to fix them.  So right now just remove them until
someone wants to resurrect them.

Discussion: https://www.postgresql.org/message-id/flat/B74C0219-6BA9-46E1-A524-5B9E8CD3BDB3%40yesql.se
2018-07-13 10:01:04 +02:00
Andrew Dunstan 56b4da8c9d Use more modern instructions for creating a new dev cycle 2018-07-01 07:55:05 -04:00
Andrew Dunstan feced1387f Stamp HEAD as 12devel
Let the hacking begin ...
2018-06-30 12:47:59 -04:00
Andrew Dunstan d842139099 perltidy run prior to branching 2018-06-30 12:28:55 -04:00
Andrew Dunstan 2c64d20048 Update typedefs list 2018-06-30 12:07:27 -04:00
Peter Eisentraut 0fcf5e0e6e Fix whitespace 2018-06-27 08:03:54 +02:00
Andrew Dunstan e3eb8be77e Exclude files in .git from list of perl files
The .git directory might contain perl files, for example as hooks.
Since we have no control over these, they should be excluded from things
like our perlcritic checks.

Per offline report from Mike Blackwell.
2018-06-12 14:54:43 -04:00
Andrew Dunstan 85dd744a70 Move perlcritic files to new perlcheck directory 2018-06-11 14:54:28 -04:00
Andrew Dunstan af616ce483 Add a script to detect perl compile time errors and warnings
Also add a function that centralizes the logic for locating all our perl
files and use it in pgperlcritic and pgperltidy as well as the new
pgperlcheck.
2018-06-11 14:47:20 -04:00