Commit Graph

4022 Commits

Andrew Gierth a40631a920 Fix lexing of standard multi-character operators in edge cases.
Commits c6b3c939b (which fixed the precedence of >=, <=, <> operators)
and 865f14a2d (which added support for the standard => notation for
named arguments) created a class of lexer tokens which look like
multi-character operators but which have their own token IDs distinct
from Op. However, longest-match rules meant that following any of
these tokens with another operator character, as in (1<>-1), would
cause them to be incorrectly returned as Op.

The error here isn't immediately obvious, because the parser would
usually still find the correct operator via the Op token, but there
were more subtle problems:

1. If immediately followed by a comment or +-, >= <= <> would be given
   the old precedence of Op rather than the correct new precedence;

2. If followed by a comment, != would be returned as Op rather than as
   NOT_EQUAL, causing it not to be found at all;

3. If followed by a comment or +-, the => token for named arguments
   would be lexed as Op, causing the argument to be mis-parsed as a
   simple expression, usually causing an error.

Fix by explicitly checking for the operators in the {operator} code
block in addition to all the existing special cases there.

Backpatch to 9.5 where the problem was introduced.

Analysis and patch by me; review by Tom Lane.
Discussion: https://postgr.es/m/87va851ppl.fsf@news-spur.riddles.org.uk
2018-08-23 21:42:40 +01:00
Andrew Gierth d4a63f8297 Reduce an unnecessary O(N^3) loop in lexer.
The lexer's handling of operators contained an O(N^3) hazard when
dealing with long strings of + or - characters; it seems hard to
prevent this case from being O(N^2), but the additional N multiplier
was not needed.

Backpatch all the way since this has been there since 7.x, and it
presents at least a mild hazard in that trying to do Bind, PREPARE or
EXPLAIN on a hostile query could take excessive time (without
honouring cancels or timeouts) even if the query was never executed.
2018-08-23 21:42:40 +01:00
Tom Lane 5ca0077419 In libpq, don't look up all the hostnames at once.
Historically, we looked up the target hostname in connectDBStart, so that
PQconnectPoll did not need to do DNS name resolution.  The patches that
added multiple-target-host support to libpq preserved this division of
labor; but it's really nonsensical now, because it means that if any one
of the target hosts fails to resolve in DNS, the connection fails.  That
negates the no-single-point-of-failure goal of the feature.  Additionally,
DNS lookups aren't exactly cheap, but the code did them all even if the
first connection attempt succeeded.

Hence, rearrange so that PQconnectPoll does the lookups, and only looks
up a hostname when it's time to try that host.  This does mean that
PQconnectPoll could block on a DNS lookup --- but if you wanted to avoid
that, you should be using hostaddr, as the documentation has always
specified.  It seems fairly unlikely that any applications would really
care whether the lookup occurs inside PQconnectStart or PQconnectPoll.
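
For illustration, a minimal sketch (not part of the commit) of the
non-blocking calling pattern this affects; the address and database name
are placeholders, and supplying hostaddr is what keeps PQconnectPoll from
ever blocking on DNS:

#include <stdio.h>
#include <sys/select.h>
#include <libpq-fe.h>

int
main(void)
{
    /* hostaddr bypasses name resolution entirely, so PQconnectPoll cannot
     * block on a DNS lookup; the values here are placeholders. */
    PGconn *conn = PQconnectStart("hostaddr=192.0.2.10 port=5432 dbname=postgres");

    if (conn == NULL)
    {
        fprintf(stderr, "out of memory\n");
        return 1;
    }
    if (PQstatus(conn) == CONNECTION_BAD)
    {
        fprintf(stderr, "startup failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    for (;;)
    {
        PostgresPollingStatusType st = PQconnectPoll(conn);
        fd_set      fds;
        int         sock;

        if (st == PGRES_POLLING_OK)
            break;
        if (st == PGRES_POLLING_FAILED)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        /* wait until the socket is ready in the direction libpq asked for */
        sock = PQsocket(conn);
        FD_ZERO(&fds);
        FD_SET(sock, &fds);
        select(sock + 1,
               st == PGRES_POLLING_READING ? &fds : NULL,
               st == PGRES_POLLING_WRITING ? &fds : NULL,
               NULL, NULL);
    }

    printf("connection established\n");
    PQfinish(conn);
    return 0;
}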

In addition to calling out that fact explicitly, do some other minor
wordsmithing in the docs around the multiple-target-host feature.

Since this seems like a bug in the multiple-target-host feature,
backpatch to v10 where that was introduced.  In the back branches,
avoid moving any existing fields of struct pg_conn, just in case
any third-party code is looking into that struct.

Tom Lane, reviewed by Fabien Coelho

Discussion: https://postgr.es/m/4913.1533827102@sss.pgh.pa.us
2018-08-23 16:39:36 -04:00
Tom Lane e1d19c902e Require a C99-compliant snprintf(), and remove related workarounds.
Since our substitute snprintf now returns a C99-compliant result,
there's no need anymore to have complicated code to cope with pre-C99
behavior.  We can just make configure substitute snprintf.c if it finds
that the system snprintf() is pre-C99.  (Note: I do not believe that
there are any platforms where this test will trigger that weren't
already being rejected due to our other C99-ish feature requirements for
snprintf.  But let's add the check for paranoia's sake.)  Then, simplify
the call sites that had logic to cope with the pre-C99 definition.

I also dropped some stuff that was being paranoid about the possibility
of snprintf overrunning the given buffer.  The only reports we've ever
heard of that being a problem were for Solaris 7, which is long dead,
and we've sure not heard any reports of these assertions triggering in
a long time.  So let's drop that complexity too.

Likewise, drop some code that wasn't trusting snprintf to set errno
when it returns -1.  That would be not-per-spec, and again there's
no real reason to believe it is a live issue, especially not for
snprintfs that pass all of configure's feature checks.

Discussion: https://postgr.es/m/17245.1534289329@sss.pgh.pa.us
2018-08-16 13:01:09 -04:00
Tom Lane cc4f6b7786 Clean up assorted misuses of snprintf()'s result value.
Fix a small number of places that were testing the result of snprintf()
but doing so incorrectly.  The right test for buffer overrun, per C99,
is "result >= bufsize" not "result > bufsize".  Some places were also
checking for failure with "result == -1", but the standard only says
that a negative value is delivered on failure.

(Note that this only makes these places correct if snprintf() delivers
C99-compliant results.  But at least now these places are consistent
with all the other places where we assume that.)

Also, make psql_start_test() and isolation_start_test() check for
buffer overrun while constructing their shell commands.  There seems
like a higher risk of overrun, with more severe consequences, here
than there is for the individual file paths that are made elsewhere
in the same functions, so this seemed like a worthwhile change.

Also fix guc.c's do_serialize() to initialize errno = 0 before
calling vsnprintf.  In principle, this should be unnecessary because
vsnprintf should have set errno if it returns a failure indication ...
but the other two places this coding pattern is cribbed from don't
assume that, so let's be consistent.
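
For illustration, a hedged sketch of the pattern described above; the
helper name and command text are invented, not taken from the regress
code.  Truncation is detected with ">=", failure with any negative result,
and errno is cleared first so a failure report is meaningful:

#include <errno.h>
#include <stdio.h>
#include <string.h>

int
build_command(char *buf, size_t bufsize, const char *infile, const char *outfile)
{
    int         len;

    errno = 0;                  /* so strerror() below is meaningful */
    len = snprintf(buf, bufsize, "psql -f %s -o %s", infile, outfile);
    if (len < 0)                /* any negative value means failure, not just -1 */
    {
        fprintf(stderr, "snprintf failed: %s\n", strerror(errno));
        return -1;
    }
    if ((size_t) len >= bufsize)    /* ">=" is the correct overrun test */
    {
        fprintf(stderr, "command too long for buffer\n");
        return -1;
    }
    return len;
}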

These errors are all very old, so back-patch as appropriate.  I think
that only the shell command overrun cases are even theoretically
reachable in practice, but there's not much point in erroneous error
checks.

Discussion: https://postgr.es/m/17245.1534289329@sss.pgh.pa.us
2018-08-15 16:29:31 -04:00
Tom Lane 1e6e98f763 Fix libpq's implementation of per-host connection timeouts.
Commit 5f374fe7a attempted to turn the connect_timeout from an overall
maximum time limit into a per-host limit, but it didn't do a great job of
that.  The timer would only get restarted if we actually detected timeout
within connectDBComplete(), not if we changed our attention to a new host
for some other reason.  In that case the old timeout continued to run,
possibly causing a premature timeout failure for the new host.

Fix that, and also tweak the logic so that if we do get a timeout,
we advance to the next available IP address, not to the next host name.
There doesn't seem to be a good reason to assume that all the IP
addresses supplied for a given host name will necessarily fail the
same way as the current one.  Moreover, this conforms better to the
admittedly-vague documentation statement that the timeout is "per
connection attempt".  I changed that to "per host name or IP address"
to be clearer.  (Note that reconnections to the same server, such as for
switching protocol version or SSL status, don't get their own separate
timeout; that was true before and remains so.)

Also clarify documentation about the interpretation of connect_timeout
values less than 2.
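
A hedged usage sketch (host names and database are placeholders): with
this fix, each host name or IP address tried gets its own 5-second
allowance rather than sharing one overall timer.

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    /* connect_timeout now applies per host name or IP address tried */
    PGconn *conn = PQconnectdb("host=db1.example.com,db2.example.com "
                               "port=5432 dbname=postgres "
                               "connect_timeout=5");

    if (PQstatus(conn) != CONNECTION_OK)
        fprintf(stderr, "no host could be reached: %s", PQerrorMessage(conn));
    else
        printf("connected\n");
    PQfinish(conn);
    return 0;
}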

This seems like a bug, so back-patch to v10 where this logic came in.

Tom Lane, reviewed by Fabien Coelho

Discussion: https://postgr.es/m/5735.1533828184@sss.pgh.pa.us
2018-08-13 13:07:52 -04:00
Tom Lane 1eee8d4994 Remove unwanted "garbage cleanup" logic in Makefiles.
GNUmakefile.in defined a macro "garbage" that seems to have been meant
as a suitable target for automatic "rm -rf" treatment, but it isn't
actually used anywhere (and indeed never was, AFAICT).

Moreover, we have concluded that the Makefiles shouldn't take it upon
themselves to remove files that aren't expected by-products of building,
so that doing anything like that would be against project policy anyway.
Hence, just remove the macro.

Grepping around finds another violation of that policy in ecpg/preproc,
so clean that up too.

Daniel Gustafsson (ecpg change by me)

Discussion: https://postgr.es/m/AFBEF63E-E19D-4EBB-9F08-4617CDC751ED@yesql.se
2018-08-08 14:32:29 -04:00
Tom Lane d1c6a14bac Fix failure to reset libpq's state fully between connection attempts.
The logic in PQconnectPoll() did not take care to ensure that all of
a PGconn's internal state variables were reset before trying a new
connection attempt.  If we got far enough in the connection sequence
to have changed any of these variables, and then decided to try a new
server address or server name, the new connection might be completed
with some state that really only applied to the failed connection.

While this has assorted bad consequences, the only one that is clearly
a security issue is that password_needed didn't get reset, so that
if the first server asked for a password and the second didn't,
PQconnectionUsedPassword() would return an incorrect result.  This
could be leveraged by unprivileged users of dblink or postgres_fdw
to allow them to use server-side login credentials that they should
not be able to use.
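
A hedged sketch of the kind of check that dblink/postgres_fdw-style
callers make (the helper name is invented); the fix ensures the answer
reflects the connection that actually succeeded, not an earlier failed
attempt:

#include <stdio.h>
#include <libpq-fe.h>

int
connect_checked(const char *conninfo, PGconn **out)
{
    PGconn *conn = PQconnectdb(conninfo);

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        PQfinish(conn);
        return 0;
    }
    /* refuse to proceed unless the server really asked for the password */
    if (!PQconnectionUsedPassword(conn))
    {
        fprintf(stderr, "server did not request the password\n");
        PQfinish(conn);
        return 0;
    }
    *out = conn;
    return 1;
}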

Other notable problems include the possibility of forcing a v2-protocol
connection to a server capable of supporting v3, or overriding
"sslmode=prefer" to cause a non-encrypted connection to a server that
would have accepted an encrypted one.  Those are certainly bugs but
it's harder to paint them as security problems in themselves.  However,
forcing a v2-protocol connection could result in libpq having a wrong
idea of the server's standard_conforming_strings setting, which opens
the door to SQL-injection attacks.  The extent to which that's actually
a problem, given the prerequisite that the attacker needs control of
the client's connection parameters, is unclear.

These problems have existed for a long time, but became more easily
exploitable in v10, both because it introduced easy ways to force libpq
to abandon a connection attempt at a late stage and then try another one
(rather than just giving up), and because it provided an easy way to
specify multiple target hosts.

Fix by rearranging PQconnectPoll's state machine to provide centralized
places to reset state properly when moving to a new target host or when
dropping and retrying a connection to the same host.

Tom Lane, reviewed by Noah Misch.  Our thanks to Andrew Krasichkov
for finding and reporting the problem.

Security: CVE-2018-10915
2018-08-06 10:53:35 -04:00
Heikki Linnakangas 77291139c7 Remove support for tls-unique channel binding.
There are some problems with the tls-unique channel binding type. It's not
supported by all SSL libraries, and strictly speaking it's not defined for
TLS 1.3 at all, even though at least in OpenSSL, the functions used for it
still seem to work with TLS 1.3 connections. And since we had no
mechanism to negotiate what channel binding type to use, there would be
awkward interoperability issues if a server only supported some channel
binding types. tls-server-end-point seems feasible to support with any SSL
library, so let's just stick to that.

This removes the scram_channel_binding libpq option altogether, since there
is now only one supported channel binding type.

This also removes all the channel binding tests from the SSL test suite.
They were really just testing the scram_channel_binding option, which
is now gone. Channel binding is used if both client and server support it,
so it is used in the existing tests. It would be good to have some tests
specifically for channel binding, to make sure it really is used, and to
cover the different combinations of a client and a server that do or don't
support it. The current set of settings we have makes it hard to write such
tests, but I did test those things manually, by disabling
HAVE_BE_TLS_GET_CERTIFICATE_HASH and/or
HAVE_PGTLS_GET_PEER_CERTIFICATE_HASH.

I also removed the SCRAM_CHANNEL_BINDING_TLS_END_POINT constant. This is a
matter of taste, but IMO it's more readable to just use the
"tls-server-end-point" string.

Refactor the checks on whether the SSL library supports the functions
needed for tls-server-end-point channel binding. Now the server won't
advertise, and the client won't choose, the SCRAM-SHA-256-PLUS variant, if
compiled with an OpenSSL version too old to support it.

In passing, add some sanity checks that the chosen SASL mechanism,
SCRAM-SHA-256 or SCRAM-SHA-256-PLUS, matches whether the SCRAM exchange
used channel binding or not; for example, the client might select the
non-channel-binding variant SCRAM-SHA-256 but then use channel binding in
the SCRAM exchange anyway. It's harmless from a security point of view,
I believe, and I'm not sure if there are some other conditions that would
cause the connection to fail, but it seems better to be strict about these
things and check explicitly.

Discussion: https://www.postgresql.org/message-id/ec787074-2305-c6f4-86aa-6902f98485a4%40iki.fi
2018-08-05 13:44:21 +03:00
Tom Lane 24986c9552 Change libpq's internal uses of PQhost() to inspect host field directly.
Commit 1944cdc98 changed PQhost() to return the hostaddr value when that
is specified and host isn't.  This is a good idea in general, but
fe-auth.c and related files contain PQhost() calls for which it isn't.
Specifically, when we compare SSL certificates or other server identity
information to the host field, we do not want to use hostaddr instead;
that's not what's documented, that's not what happened pre-v10, and
it doesn't seem like a good idea.

Instead, we can just look at connhost[].host directly.  This does what
we want in v10 and up; in particular, if neither host nor hostaddr
were given, the host field will be replaced with the default host name.
That seems useful, and it's likely the reason that these places were
coded to call PQhost() originally (since pre-v10, the stored field was
not replaced with the default).

Back-patch to v10, as 1944cdc98 (just) was.

Discussion: https://postgr.es/m/23287.1533227021@sss.pgh.pa.us
2018-08-03 12:12:10 -04:00
Tom Lane e3f99e03e2 Fix libpq's code for searching .pgpass; rationalize empty-list-item cases.
Before v10, we always searched ~/.pgpass using the host parameter,
and nothing else, to match to the "hostname" field of ~/.pgpass.
(However, null host or host matching DEFAULT_PGSOCKET_DIR was replaced by
"localhost".)  In v10, this got broken by commit 274bb2b38, repaired by
commit bdac9836d, and broken again by commit 7b02ba62e; in the code
actually shipped, we'd search with hostaddr if both that and host were
specified --- though oddly, *not* if only hostaddr were specified.
Since this is directly contrary to the documentation, and not
backwards-compatible, it's clearly a bug.

However, the change wasn't totally without justification, even though it
wasn't done quite right, because the pre-v10 behavior has arguably been
buggy since we added hostaddr.  If hostaddr is specified and host isn't,
the pre-v10 code will search ~/.pgpass for "localhost", and ship that
password off to a server that most likely isn't local at all.  That's
unhelpful at best, and could be a security breach at worst.

Therefore, rather than just revert to that old behavior, let's define
the behavior as "search with host if provided, else with hostaddr if
provided, else search for localhost".  (As before, a host name matching
DEFAULT_PGSOCKET_DIR is replaced by localhost.)  This matches the
behavior of the actual connection code, so that we don't pick up an
inappropriate password; and it allows useful searches to happen when
only hostaddr is given.

While we're messing around here, ensure that empty elements within a
host or hostaddr list select the same behavior as a totally-empty
field would; for instance "host=a,,b" is equivalent to "host=a,/tmp,b"
if DEFAULT_PGSOCKET_DIR is /tmp.  Things worked that way in some cases
already, but not consistently so, which contributed to the confusion
about what key ~/.pgpass would get searched with.

Update documentation accordingly, and also clarify some nearby text.

Back-patch to v10 where the host/hostaddr list functionality was
introduced.

Discussion: https://postgr.es/m/30805.1532749137@sss.pgh.pa.us
2018-08-01 12:30:36 -04:00
Heikki Linnakangas b90d97e081 Fix error message when a hostaddr cannot be parsed.
We were incorrectly passing hostname, not hostaddr, in the error message,
and because of that, you got:

$ psql 'hostaddr=foo'
psql: could not parse network address "(null)": Name or service not known

Backpatch to v10, where this was broken (by commit 7b02ba62e9).

Report and fix by Robert Haas.

Discussion: https://www.postgresql.org/message-id/CA+TgmoapFQA30NomGKEaZCu3iN7mF7fux8fbbk9SouVOT2JP7w@mail.gmail.com
2018-07-19 20:24:29 +03:00
Heikki Linnakangas 6b387179ba Fix misc typos, mostly in comments.
A collection of typos I happened to spot while reading code, as well as
grepping for common mistakes.

Backpatch to all supported versions, as applicable, to avoid conflicts
when backporting other commits in the future.
2018-07-18 16:17:32 +03:00
Alexander Korotkov edf59c40dd Fix more wrong paths in header comments
It appears that there are more files whose header comment paths are
wrong, so fix those paths.  No backpatching, per proposal of Tom Lane.

Discussion: https://postgr.es/m/CAPpHfdsJyYbOj59MOQL%2B4XxdcomLSLfLqBtAvwR%2BpsCqj3ELdQ%40mail.gmail.com
2018-07-11 17:57:04 +03:00
Alvaro Herrera f2c587067a Rethink how to get float.h in old Windows API for isnan/isinf
We include <float.h> in every place that needs isnan(), because MSVC
used to require it.  However, since MSVC 2013 that's no longer necessary
(cf. commit cec8394b5c), so we can retire the inclusion to a
version-specific stanza in win32_port.h, where it doesn't need to
pollute random .c files.  The header is of course still needed in a few
places for other reasons.

I (Álvaro) removed float.h from a few more files than in Emre's original
patch.  This doesn't break the build in my system, but we'll see what
the buildfarm has to say about it all.

Author: Emre Hasegeli
Discussion: https://postgr.es/m/CAE2gYzyc0+5uG+Cd9-BSL7NKC8LSHLNg1Aq2=8ubjnUwut4_iw@mail.gmail.com
2018-07-11 09:11:48 -04:00
Andrew Dunstan feced1387f Stamp HEAD as 12devel
Let the hacking begin ...
2018-06-30 12:47:59 -04:00
Michael Paquier dad5f8a3d5 Make capitalization of term "OpenSSL" more consistent
This includes code comments and documentation.  No backpatch as this is
cosmetic even if there are documentation changes which are user-facing.

Author: Daniel Gustafsson
Discussion: https://postgr.es/m/BB89928E-2BC7-489E-A5E4-6D204B3954CF@yesql.se
2018-06-29 09:45:44 +09:00
Peter Eisentraut 299addd592 Translation updates
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 884f33d735870f94357820800840af3e93ff4628
2018-06-25 12:37:18 +02:00
Thomas Munro 4c8156d871 Add PGTYPESchar_free() to avoid cross-module problems on Windows.
On Windows, it is sometimes important for corresponding malloc() and
free() calls to be made from the same DLL, since some build options can
result in multiple allocators being active at the same time.  For that
reason we already provided PQfreemem().  This commit adds a similar
function for freeing string results allocated by the pgtypes library.
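
A minimal usage sketch, assuming the declaration is exposed through the
pgtypes headers installed with ecpg; the point is simply that a string
allocated by the library is released by the library:

#include <stdio.h>
#include <pgtypes.h>            /* PGTYPESchar_free() */
#include <pgtypes_timestamp.h>  /* timestamp, PGTYPEStimestamp_* */

int
main(void)
{
    char        input[] = "2018-06-18 18:33:53";
    timestamp   ts = PGTYPEStimestamp_from_asc(input, NULL);
    char       *text = PGTYPEStimestamp_to_asc(ts);     /* allocated by pgtypes */

    printf("%s\n", text);

    /*
     * Free with the matching function from the same library, not plain
     * free(), so allocation and release happen in the same DLL on Windows.
     */
    PGTYPESchar_free(text);
    return 0;
}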

Author: Takayuki Tsunakawa
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/0A3221C70F24FB45833433255569204D1F8AD5D6%40G01JPEXMBYT05
2018-06-18 18:33:53 +12:00
Tom Lane 6b74f5eaad Avoid unnecessary use of strncpy in a couple of places in ecpg.
Use of strncpy with a length limit based on the source, rather than
the destination, is non-idiomatic and draws warnings from gcc 8.
Replace with memcpy, which does exactly the same thing in these cases,
but with less chance for confusion.
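
A hedged before/after sketch of the idiom in question (function name and
buffer contract invented for illustration):

#include <stdio.h>
#include <string.h>

/*
 * "len" is derived from the source string, so strncpy's length limit adds
 * no protection and newer gcc warns about the idiom; memcpy copies exactly
 * the same bytes with clearer intent.  dst must have room for len + 1.
 */
static void
copy_name(char *dst, const char *src)
{
    size_t      len = strlen(src);

    /* old coding: strncpy(dst, src, len); */
    memcpy(dst, src, len);
    dst[len] = '\0';
}

int
main(void)
{
    char        buf[64];

    copy_name(buf, "ecpg");
    printf("%s\n", buf);
    return 0;
}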

Backpatch to all supported branches.

Discussion: https://postgr.es/m/21789.1529170195@sss.pgh.pa.us
2018-06-16 14:58:11 -04:00
Andrew Dunstan 3a7cc727c7 Don't fall off the end of perl functions
This complies with the perlcritic policy
Subroutines::RequireFinalReturn, which is a severity 4 policy. Since we
only currently check at severity level 5, the policy is raised to that
level until we move to level 4 or lower, so that any new infringements
will be caught.

A small cosmetic piece of tidying of the pgperlcritic script is
included.

Mike Blackwell

Discussion: https://postgr.es/m/CAESHdJpfFm_9wQnQ3koY3c91FoRQsO-fh02za9R3OEMndOn84A@mail.gmail.com
2018-05-27 09:08:42 -04:00
Peter Eisentraut 917a68f010 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 3a5a71cccad5c68e01008e9e3a4f06930197a05e
2018-05-21 12:29:52 -04:00
Tom Lane c6e846446d printf("%lf") is not portable, so omit the "l".
The "l" (ell) width spec means something in the corresponding scanf usage,
but not here.  While modern POSIX says that applying "l" to "f" and other
floating format specs is a no-op, SUSv2 says it's undefined.  Buildfarm
experience says that some old compilers emit warnings about it, and at
least one old stdio implementation (mingw's "ANSI" option) actually
produces wrong answers and/or crashes.
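
A small illustrative sketch (not from the tree):

#include <stdio.h>

int
main(void)
{
    double      x = 3.14159;

    /* Portable: float arguments are promoted to double in printf calls,
     * so plain "%f" is all that is needed. */
    printf("%f\n", x);

    /* Non-portable: printf("%lf\n", x); -- SUSv2 leaves the "l" undefined
     * here, and some old stdio implementations misbehave on it. */

    /* For the scanf family, the "l" modifier IS required for a double: */
    if (sscanf("2.5", "%lf", &x) == 1)
        printf("%f\n", x);
    return 0;
}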

Discussion: https://postgr.es/m/21670.1526769114@sss.pgh.pa.us
Discussion: https://postgr.es/m/c085e1da-0d64-1c15-242d-c921f32e0d5c@dunslane.net
2018-05-20 11:40:54 -04:00
Tom Lane 5e79405d82 Hot-fix ecpg regression test for missing ecpg_config.h inclusion.
I don't think this is really the best long-term answer, and in
particular it doesn't fix the pre-existing hazard in sqltypes.h.
But for the moment let's just try to make the buildfarm green again.

Discussion: https://postgr.es/m/151935568942.1461.14623890240535309745@wrigleys.postgresql.org
2018-05-18 19:03:32 -04:00
Tom Lane dbccdd375b Add some test coverage for ecpg's "long long" support.
This will only actually exercise the "long long" code paths on platforms
where "long" is 32 bits --- otherwise, the SQL bigint type maps to
plain "long", and we will test that code path instead.  But that's
probably sufficient coverage, and anyway we weren't testing either
code path before.

Dang Minh Huong, tweaked a bit by me

Discussion: https://postgr.es/m/151935568942.1461.14623890240535309745@wrigleys.postgresql.org
2018-05-18 13:04:59 -04:00
Andres Freund 2993435dba Further -Wimplicit-fallthrough cleanup.
Tom's earlier commit 41c912cad1 didn't update a few cases that are only
encountered with the non-standard --with-llvm config flag.  There's also
one case that appears to be a deficiency in gcc's (up to trunk as of a
few days ago) detection of "fallthrough" comments; changing the placement
slightly fixes that.

Author: Andres Freund
Discussion: https://postgr.es/m/20180502003239.wfnqu7ekz7j7imm4@alap3.anarazel.de
2018-05-01 19:53:48 -07:00
Tom Lane 41c912cad1 Clean up warnings from -Wimplicit-fallthrough.
Recent gcc can warn about switch-case fall throughs that are not
explicitly labeled as intentional.  This seems like a good thing,
so clean up the warnings exposed thereby by labeling all such
cases with comments that gcc will recognize.

In files that already had one or more suitable comments, I generally
matched the existing style of those.  Otherwise I went with
/* FALLTHROUGH */, which is one of the spellings approved at the
more-restrictive-than-default level -Wimplicit-fallthrough=4.
(At the default level you can also spell it /* FALL ?THRU */,
and it's not picky about case.  What you can't do is include
additional text in the same comment, so some existing comments
containing versions of this aren't good enough.)
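
An illustrative sketch of the convention (not taken from any particular
file):

#include <stdio.h>

static void
describe(int n)
{
    switch (n)
    {
        case 0:
            printf("zero\n");
            /* FALLTHROUGH */
        case 1:
            printf("at most one\n");
            break;
        default:
            printf("more than one\n");
            break;
    }
}

int
main(void)
{
    describe(0);
    describe(5);
    return 0;
}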

Testing with gcc 8.0.1 (Fedora 28's current version), I found that
I also had to put explicit "break"s after elog(ERROR) or ereport(ERROR);
apparently, for this purpose gcc doesn't recognize that those don't
return.  That seems like possibly a gcc bug, but it's fine because
in most places we did that anyway; so this amounts to a visit from the
style police.

Discussion: https://postgr.es/m/15083.1525207729@sss.pgh.pa.us
2018-05-01 19:35:08 -04:00
Peter Eisentraut 76ece16974 perltidy: Add option --nooutdent-long-comments 2018-04-27 11:37:43 -04:00
Peter Eisentraut d4f16d5071 perltidy: Add option --nooutdent-long-quotes 2018-04-27 11:37:43 -04:00
Tom Lane bdf46af748 Post-feature-freeze pgindent run.
Discussion: https://postgr.es/m/15719.1523984266@sss.pgh.pa.us
2018-04-26 14:47:16 -04:00
Tom Lane d25c2ee9c0 In libpq, free any partial query result before collecting a server error.
We'd throw away the partial result anyway after parsing the error message.
Throwing it away beforehand costs nothing and reduces the risk of
out-of-memory failure.  Also, at least in systems that behave like
glibc/Linux, if the partial result was very large then the error PGresult
would get allocated at high heap addresses, preventing the heap storage
used by the partial result from being released to the OS until the error
PGresult is freed.

In psql >= 9.6, we hold onto the error PGresult until another error is
received (for \errverbose), so that this behavior causes a seeming
memory leak to persist for awhile, as in a recent complaint from
Darafei Praliaskouski.  This is a potential performance regression from
older versions, justifying back-patching at least that far.  But similar
behavior may occur in other client applications, so it seems worth just
back-patching to all supported branches.

Discussion: https://postgr.es/m/CAC8Q8tJ=7cOkPePyAbJE_Pf691t8nDFhJp0KZxHvnq_uicfyVg@mail.gmail.com
2018-04-13 12:53:45 -04:00
Simon Riggs 08ea7a2291 Revert MERGE patch
This reverts commits d204ef6377, 83454e3c2b, and a few more commits
thereafter (complete list at the end) related to the MERGE feature.

While the feature was fully functional, with sufficient test coverage and
necessary documentation, it was felt that some parts of the executor and
parse-analyzer could use a different design, and it wasn't possible to do
that in the available time. So it was decided to revert the patch for PG11
and retry in the future.

Thanks again to all reviewers and bug reporters.

List of commits reverted, in reverse chronological order:

 f1464c5380 Improve parse representation for MERGE
 ddb4158579 MERGE syntax diagram correction
 530e69e59b Allow cpluspluscheck to pass by renaming variable
 01b88b4df5 MERGE minor errata
 3af7b2b0d4 MERGE fix variable warning in non-assert builds
 a5d86181ec MERGE INSERT allows only one VALUES clause
 4b2d44031f MERGE post-commit review
 4923550c20 Tab completion for MERGE
 aa3faa3c7a WITH support in MERGE
 83454e3c2b New files for MERGE
 d204ef6377 MERGE SQL Command following SQL:2016

Author: Pavan Deolasee
Reviewed-by: Michael Paquier
2018-04-12 11:22:56 +01:00
Tom Lane 9c0a0de4c9 Switch client-side code to include catalog/pg_foo_d.h not pg_foo.h.
Everything of use to frontend code should now appear in the _d.h files,
and making this change frees us from needing to worry about whether the
catalog header files proper are frontend-safe.
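
A hedged sketch of what this enables on the client side (the query is
invented, and the include path assumes the server include directory from
pg_config --includedir-server is on the compiler's search path):

#include <stdio.h>
#include <libpq-fe.h>
#include "catalog/pg_type_d.h"  /* frontend-safe OID macros such as TEXTOID */

/*
 * Frontend code can now use a generated OID macro instead of a hard-coded
 * 25 when declaring parameter types for PQexecParams().
 */
void
greet(PGconn *conn)
{
    const char *values[1] = {"world"};
    Oid         types[1] = {TEXTOID};
    PGresult   *res = PQexecParams(conn, "SELECT 'hello, ' || $1",
                                   1, types, values, NULL, NULL, 0);

    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        printf("%s\n", PQgetvalue(res, 0, 0));
    PQclear(res);
}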

Remove src/interfaces/ecpg/ecpglib/pg_type.h entirely, as the previous
commit reduced it to a confusingly-named wrapper around pg_type_d.h.

In passing, make test_rls_hooks.c follow project convention of including
our own files with #include "" not <>.

Discussion: https://postgr.es/m/23690.1523031777@sss.pgh.pa.us
2018-04-08 13:59:52 -04:00
Tom Lane 372728b0d4 Replace our traditional initial-catalog-data format with a better design.
Historically, the initial catalog data to be installed during bootstrap
has been written in DATA() lines in the catalog header files.  This had
lots of disadvantages: the format was badly underdocumented, it was
very difficult to edit the data in any mechanized way, and due to the
lack of any abstraction the data was verbose, hard to read/understand,
and easy to get wrong.

Hence, move this data into separate ".dat" files and represent it in a way
that can easily be read and rewritten by Perl scripts.  The new format is
essentially "key => value" for each column; while it's a bit repetitive,
explicit labeling of each value makes the data far more readable and less
error-prone.  Provide a way to abbreviate entries by omitting field values
that match a specified default value for their column.  This allows removal
of a large amount of repetitive boilerplate and also lowers the barrier to
adding new columns.

Also teach genbki.pl how to translate symbolic OID references into
numeric OIDs for more cases than just "regproc"-like pg_proc references.
It can now do that for regprocedure-like references (thus solving the
problem that regproc is ambiguous for overloaded functions), operators,
types, opfamilies, opclasses, and access methods.  Use this to turn
nearly all OID cross-references in the initial data into symbolic form.
This represents a very large step forward in readability and error
resistance of the initial catalog data.  It should also reduce the
difficulty of renumbering OID assignments in uncommitted patches.

Also, solve the longstanding problem that frontend code that would like to
use OID macros and other information from the catalog headers often had
difficulty with backend-only code in the headers.  To do this, arrange for
all generated macros, plus such other declarations as we deem fit, to be
placed in "derived" header files that are safe for frontend inclusion.
(Once clients migrate to using these pg_*_d.h headers, it will be possible
to get rid of the pg_*_fn.h headers, which only exist to quarantine code
away from clients.  That is left for follow-on patches, however.)

The now-automatically-generated macros include the Anum_xxx and Natts_xxx
constants that we used to have to update by hand when adding or removing
catalog columns.

Replace the former manual method of generating OID macros for pg_type
entries with an automatic method, ensuring that all built-in types have
OID macros.  (But note that this patch does not change the way that
OID macros for pg_proc entries are built and used.  It's not clear that
making that match the other catalogs would be worth extra code churn.)

Add SGML documentation explaining what the new data format is and how to
work with it.

Despite being a very large change in the catalog headers, there is no
catversion bump here, because postgres.bki and related output files
haven't changed at all.

John Naylor, based on ideas from various people; review and minor
additional coding by me; previous review by Alvaro Herrera

Discussion: https://postgr.es/m/CAJVSVGWO48JbbwXkJz_yBFyGYW-M9YWxnPdxJBUosDC9ou_F0Q@mail.gmail.com
2018-04-08 13:17:27 -04:00
Tom Lane dddfc4cb2e Prevent accidental linking of system-supplied copies of libpq.so etc.
We were being careless in some places about the order of -L switches in
link command lines, such that -L switches referring to external directories
could come before those referring to directories within the build tree.
This made it possible to accidentally link a system-supplied library, for
example /usr/lib/libpq.so, in place of the one built in the build tree.
Hilarity ensued, the more so the older the system-supplied library is.

To fix, break LDFLAGS into two parts, a sub-variable LDFLAGS_INTERNAL
and the main LDFLAGS variable, both of which are "recursively expanded"
so that they can be incrementally adjusted by different makefiles.
Establish a policy that -L switches for directories in the build tree
must always be added to LDFLAGS_INTERNAL, while -L switches for external
directories must always be added to LDFLAGS.  This is sufficient to
ensure a safe search order.  For simplicity, we typically also put -l
switches for the respective libraries into those same variables.
(Traditional make usage would have us put -l switches into LIBS, but
cleaning that up is a project for another day, as there's no clear
need for it.)

This turns out to also require separating SHLIB_LINK into two variables,
SHLIB_LINK and SHLIB_LINK_INTERNAL, with a similar rule about which
switches go into which variable.  And likewise for PG_LIBS.

Although this change might appear to affect external users of pgxs.mk,
I think it doesn't; they shouldn't have any need to touch the _INTERNAL
variables.

In passing, tweak src/common/Makefile so that the value of CPPFLAGS
recorded in pg_config lacks "-DFRONTEND" and the recorded value of
LDFLAGS lacks "-L../../../src/common".  Both of those things are
mistakes, apparently introduced during prior code rearrangements,
as old versions of pg_config don't print them.  In general we don't
want anything that's specific to the src/common subdirectory to
appear in those outputs.

This is certainly a bug fix, but in view of the lack of field
complaints, I'm unsure whether it's worth the risk of back-patching.
In any case it seems wise to see what the buildfarm makes of it first.

Discussion: https://postgr.es/m/25214.1522604295@sss.pgh.pa.us
2018-04-03 16:26:05 -04:00
Simon Riggs d204ef6377 MERGE SQL Command following SQL:2016
MERGE performs actions that modify rows in the target table
using a source table or query. MERGE provides a single SQL
statement that can conditionally INSERT/UPDATE/DELETE rows,
a task that would otherwise require multiple PL statements.
e.g.

MERGE INTO target AS t
USING source AS s
ON t.tid = s.sid
WHEN MATCHED AND t.balance > s.delta THEN
  UPDATE SET balance = t.balance - s.delta
WHEN MATCHED THEN
  DELETE
WHEN NOT MATCHED AND s.delta > 0 THEN
  INSERT VALUES (s.sid, s.delta)
WHEN NOT MATCHED THEN
  DO NOTHING;

MERGE works with regular and partitioned tables, including
column and row security enforcement, as well as support for
row, statement and transition triggers.

MERGE is optimized for OLTP and is parameterizable, though
also useful for large scale ETL/ELT. MERGE is not intended
to be used in preference to existing single SQL commands
for INSERT, UPDATE or DELETE since there is some overhead.
MERGE can be used statically from PL/pgSQL.

MERGE does not yet support inheritance, write rules,
RETURNING clauses, updatable views or foreign tables.
MERGE follows SQL Standard per the most recent SQL:2016.

Includes full tests and documentation, including full
isolation tests to demonstrate the concurrent behavior.

This version written from scratch in 2017 by Simon Riggs,
using docs and tests originally written in 2009. Later work
from Pavan Deolasee has been both complex and deep, leaving
the lead author credit now in his hands.
Extensive discussion of concurrency from Peter Geoghegan,
with thanks for the time and effort contributed.

Various issues reported via sqlsmith by Andreas Seltenreich

Authors: Pavan Deolasee, Simon Riggs
Reviewer: Peter Geoghegan, Amit Langote, Tomas Vondra, Simon Riggs

Discussion:
https://postgr.es/m/CANP8+jKitBSrB7oTgT9CY2i1ObfOt36z0XMraQc+Xrz8QB0nXA@mail.gmail.com
https://postgr.es/m/CAH2-WzkJdBuxj9PO=2QaO9-3h3xGbQPZ34kJH=HukRekwM-GZg@mail.gmail.com
2018-04-03 09:28:16 +01:00
Simon Riggs 7cf8a5c302 Revert "Modified files for MERGE"
This reverts commit 354f13855e.
2018-04-02 21:34:15 +01:00
Simon Riggs 354f13855e Modified files for MERGE 2018-04-02 21:12:47 +01:00
Tom Lane 0b11a674fb Fix a boatload of typos in C comments.
Justin Pryzby

Discussion: https://postgr.es/m/20180331105640.GK28454@telsasoft.com
2018-04-01 15:01:28 -04:00
Peter Eisentraut 1944cdc982 libpq: PQhost to return active connected host or hostaddr
Previously, PQhost didn't return the connected host details when the
connection type was CHT_HOST_ADDRESS (i.e., via hostaddr).  Instead, it
returned the complete host connection parameter (which could contain
multiple hosts) or the default host details, which was confusing and
arguably incorrect.

Change this to return the actually connected host or hostaddr
irrespective of the connection type.  When hostaddr but no host was
specified, hostaddr is now returned.  Never return the original host
connection parameter, and document that PQhost cannot be relied on
before the connection is established.

PQport is similarly changed to always return the active connection port
and never the original connection parameter.
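
A small usage sketch (the address is a placeholder): only hostaddr is
given, so under the new rules PQhost() reports that address rather than a
default host name, and PQport() reports the port actually in use.

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn *conn = PQconnectdb("hostaddr=192.0.2.10 port=5432 dbname=postgres");

    if (PQstatus(conn) == CONNECTION_OK)
        printf("active host: %s, active port: %s\n",
               PQhost(conn), PQport(conn));
    else
        fprintf(stderr, "%s", PQerrorMessage(conn));
    PQfinish(conn);
    return 0;
}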

Author: Hari Babu <kommi.haribabu@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
2018-03-27 12:32:18 -04:00
Tom Lane 4b538727e2 Fix make rules that generate multiple output files.
For years, our makefiles have correctly observed that "there is no correct
way to write a rule that generates two files".  However, what we did is to
provide empty rules that "generate" the secondary output files from the
primary one, and that's not right either.  Depending on the details of
the creating process, the primary file might end up timestamped later than
one or more secondary files, causing subsequent make runs to consider the
secondary file(s) out of date.  That's harmless in a plain build, since
make will just re-execute the empty rule and nothing happens.  But it's
fatal in a VPATH build, since make will expect the secondary file to be
rebuilt in the build directory.  This would manifest as "file not found"
failures during VPATH builds from tarballs, if we were ever unlucky enough
to ship a tarball with apparently out-of-date secondary files.  (It's not
clear whether that has ever actually happened, but it definitely could.)

To ensure that secondary output files have timestamps >= their primary's,
change our makefile convention to be that we provide a "touch $@" action
not an empty rule.  Also, make sure that this rule actually gets invoked
during a distprep run, else the hazard remains.

It's been like this a long time, so back-patch to all supported branches.

In HEAD, I skipped the changes in src/backend/catalog/Makefile, because
those rules are due to get replaced soon in the bootstrap data format
patch, and there seems no need to create a merge issue for that patch.
If for some reason we fail to land that patch in v11, we'll need to
back-fill the changes in that one makefile from v10.

Discussion: https://postgr.es/m/18556.1521668179@sss.pgh.pa.us
2018-03-23 13:46:00 -04:00
Peter Eisentraut fdb78948d8 Fix whitespace 2018-03-22 22:36:17 -04:00
Peter Eisentraut a364dfa4ac Attempt to fix build with unusual OpenSSL versions
Since e3bdb2d926, libpq failed to build on some platforms because they did
not have SSL_clear_options().  Mainline OpenSSL introduced
SSL_clear_options() before SSL_OP_NO_COMPRESSION, so the code should have
built fine there, but at least one old NetBSD version (build farm "coypu",
NetBSD 5.1, gcc 4.1.3, PR-20080704, powerpc) has SSL_OP_NO_COMPRESSION but
no SSL_clear_options().

So add a configure check for SSL_clear_options().  If we don't find it,
skip the call.  That means on such a platform one cannot *enable* SSL
compression if the built-in default is off, but that seems an unlikely
combination anyway and not very interesting in practice.
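
A hedged sketch of the resulting guard; HAVE_SSL_CLEAR_OPTIONS stands in
for whatever symbol the new configure probe defines, and the function name
is invented (the real code also guards on SSL_OP_NO_COMPRESSION being
defined):

#include <stdbool.h>
#include <openssl/ssl.h>

void
apply_compression_setting(SSL *ssl, bool want_compression)
{
    if (!want_compression)
        SSL_set_options(ssl, SSL_OP_NO_COMPRESSION);
#ifdef HAVE_SSL_CLEAR_OPTIONS
    else
        SSL_clear_options(ssl, SSL_OP_NO_COMPRESSION);
#endif
    /* without SSL_clear_options() we simply cannot re-enable compression */
}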
2018-03-20 16:44:52 -04:00
Michael Meskes e400840b1d Fix closing of incorrectly named cursor.
Patch by "Shinoda, Noriyoshi" <noriyoshi.shinoda@hpe.com>
2018-03-17 18:15:33 +01:00
Peter Eisentraut e3bdb2d926 Set libpq sslcompression to off by default
Since SSL compression is no longer recommended, turn the default in
libpq from on to off.

OpenSSL 1.1.0 and many distribution packages already turn compression
off by default, so such a server won't accept compression anyway.  So
this will mainly affect users of older OpenSSL installations.

Also update the documentation to make clear that this setting is no
longer recommended.

Discussion: https://www.postgresql.org/message-id/flat/595cf3b1-4ffe-7f05-6f72-f72b7afa7993%402ndquadrant.com
2018-03-17 09:17:33 -04:00
Peter Eisentraut 3a4b891964 Fix more format truncation issues
Fix the warnings created by the compiler warning options
-Wformat-overflow=2 -Wformat-truncation=2, supported since GCC 7.  This
is a more aggressive variant of the fixes in
6275f5d28a, which GCC 7 warned about by
default.

The issues are all harmless, but some dubious coding patterns are
cleaned up.

One issue that is of external interest is that BGW_MAXLEN is increased
from 64 to 96.  Apparently, the old value would cause the bgw_name of
logical replication workers to be truncated in some circumstances.
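
A hedged sketch of where the limit matters (subscription and module names
are placeholders): bgw_name and the related fields are fixed
BGW_MAXLEN-byte arrays, so longer names were silently truncated by
snprintf before the limit was raised.

#include "postgres.h"
#include "postmaster/bgworker.h"

/* typically called from a module's _PG_init() */
void
register_example_worker(void)
{
    BackgroundWorker worker;

    memset(&worker, 0, sizeof(worker));
    worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
    worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
    worker.bgw_restart_time = BGW_NEVER_RESTART;
    snprintf(worker.bgw_library_name, BGW_MAXLEN, "example_module");
    snprintf(worker.bgw_function_name, BGW_MAXLEN, "example_worker_main");
    /* with BGW_MAXLEN = 96 there is room for names like this one */
    snprintf(worker.bgw_name, BGW_MAXLEN,
             "logical replication style worker for subscription \"%s\"",
             "my_subscription");
    RegisterBackgroundWorker(&worker);
}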

But this doesn't actually add those warning options.  It appears that
the warnings depend a bit on compilation and optimization options, so it
would be annoying to have to keep up with that.  This is more of a
once-in-a-while cleanup.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
2018-03-15 11:41:42 -04:00
Michael Meskes 20ba33dade Fixed compiler warnings in test case. 2018-03-14 17:18:15 +01:00
Michael Meskes 3b7ab43804 Add Oracle like handling of char arrays.
In some cases Oracle Pro*C handles char arrays differently than ECPG does.  This
patch adds an Oracle compatibility mode to make ECPG behave like Pro*C.

Patch by David Rader <davidr@openscg.com>
2018-03-14 00:54:13 +01:00
Michael Meskes db2fc801f6 Fix double frees in ecpg.
Patch by Patrick Krecker <patrick@judicata.com>
2018-03-14 00:51:17 +01:00
Michael Meskes b32fad52e9 Set connection back to NULL after freeing it.
Patch by Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>
2018-03-13 16:22:28 +01:00