Coverity complained that we might pass a null pointer to strcmp()
if PQresultErrorField were to return NULL. That shouldn't be possible,
since the server is supposed to always provide some SQLSTATE or other
in an error message. But we usually defend against such hazards, and
it only takes a little more code to do so here.
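A minimal sketch of the defensive pattern involved (not the committed code; the SQLSTATE tested for here is just an example):

#include <stdbool.h>
#include <string.h>
#include "libpq-fe.h"

/*
 * Sketch only: PQresultErrorField() can in principle return NULL,
 * so check before handing the result to strcmp().  The SQLSTATE
 * "42P01" (undefined_table) is purely illustrative.
 */
static bool
error_matches(const PGresult *res)
{
    const char *sqlstate = PQresultErrorField(res, PG_DIAG_SQLSTATE);

    return sqlstate != NULL && strcmp(sqlstate, "42P01") == 0;
}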
There's no good reason to think this is a live bug, so no back-patch.
If we forcibly place a Material node atop a finished subplan, we need
to move any initPlans attached to the subplan up to the Material node,
in order to keep SS_finalize_plan() happy. I'd figured this out in
commit 7b67a0a49 for the case of materializing a cursor plan, but out of
an abundance of caution, I put the initPlan movement hack at the call
site for that case, rather than inside materialize_finished_plan().
That was the wrong thing, because it turns out to also be necessary for
the only other caller of materialize_finished_plan(), ie subselect.c.
We lacked any test cases that exposed the mistake, but bug#14524 from
Wei Congrui shows that it's possible to get an initPlan reference into
the top tlist in that case too, and then SS_finalize_plan() complains.
Hence, move the hack into materialize_finished_plan().
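In outline, the movement inside materialize_finished_plan() amounts to something like this (a sketch of the idea, with illustrative variable names, not necessarily the exact committed code):

/*
 * Sketch: when forcibly wrapping a finished subplan in a Material node,
 * hoist any initPlans up to the new top node, so that SS_finalize_plan()
 * finds them where it expects to.
 */
matplan->initPlan = subplan->initPlan;
subplan->initPlan = NIL;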
In HEAD, also relocate some recently-added tests in subselect.sql, which
I'd unthinkingly dropped into the middle of a sequence of related tests.
Report: https://postgr.es/m/20170202060020.1400.89021@wrigleys.postgresql.org
Given a targetlist like "srf(x), f(srf(x))", split_pathtarget_at_srfs()
decided that it needed two levels of ProjectSet nodes, failing to notice
that the two SRF calls are textually equal(). Because of that, setrefs.c
would convert the upper ProjectSet's tlist to "Var1, f(Var1)" (where Var1
represents a reference to the srf(x) output of the lower ProjectSet).
This triggered an assertion in nodeProjectSet.c complaining that it found
no SRFs to evaluate, as reported by Erik Rijkers.
What we want in such a case is to evaluate srf(x) only once and use a plain
Result node to compute "Var1, f(Var1)"; that gives results similar to what
previous versions produced, whereas allowing srf(x) to be evaluated again
in an upper ProjectSet would square the number of rows emitted.
Furthermore, even if the SRF calls aren't textually identical, we want them
to be evaluated in lockstep, because that's what happened in the old
implementation. But split_pathtarget_at_srfs() got this completely wrong,
using two levels of ProjectSet for a case like "srf(x), f(srf(y))".
Hence, rewrite split_pathtarget_at_srfs() from the ground up so that it
groups SRFs according to the depth of nesting of SRFs in their arguments.
This is pretty much how we envisioned that working originally, but I blew
it when it came to implementation.
In passing, optimize the case of target == input_target, which I noticed
is not only possible but quite common.
Discussion: https://postgr.es/m/dcbd2853c05d22088766553d60dc78c6@xs4all.nl
There is no particularly good reason to limit this value to 1000,
so increase the limit to INT_MAX / 2, the same limit we use for
shared_buffers. It's not clear how much practical effect larger
settings will have, but there seems no harm in letting people try it.
Jim Nasby, less a comment change I stripped out.
Discussion: http://postgr.es/m/f6e58a22-030b-eb8a-5457-f62fb08d701c@BlueTreble.com
Patch by Jesper Pedersen and Ashutosh Sharma, with some error handling
improvements by me. Tests from Peter Eisentraut. Reviewed by Álvaro
Herrera, Michael Paquier, Jesper Pedersen, Jeff Janes, Peter
Eisentraut, Amit Kapila, Mithun Cy, and me.
Discussion: http://postgr.es/m/e2ac6c58-b93f-9dd9-f4e6-d6d30add7fdf@redhat.com
Doing so doesn't seem to be within the purpose of the per user
connection limits, and has particularly unfortunate effects in
conjunction with parallel queries.
Backpatch to 9.6 where parallel queries were introduced.
David Rowley, reviewed by Robert Haas and Albe Laurenz.
Add CatalogTupleInsertWithInfo and CatalogTupleUpdateWithInfo to let
callers use the CatalogTupleXXX abstraction layer even in cases where
we want to share the results of CatalogOpenIndexes across multiple
inserts/updates for efficiency. This finishes the job begun in commit
2f5c9d9c9, by allowing some remaining simple_heap_insert/update
calls to be replaced. The abstraction layer is now complete enough
that we don't have to export CatalogIndexInsert at all anymore.
Also, this fixes several places in which 2f5c9d9c9 introduced performance
regressions by using retail CatalogTupleInsert or CatalogTupleUpdate even
though the previous coding had been able to amortize CatalogOpenIndexes
work across multiple tuples.
A possible future improvement is to arrange for the indexing.c functions
to cache the CatalogIndexState somewhere, maybe in the relcache, in which
case we could get rid of CatalogTupleInsertWithInfo and
CatalogTupleUpdateWithInfo again. But that's a task for another day.
Discussion: https://postgr.es/m/27502.1485981379@sss.pgh.pa.us
This extends the work done in commit 2f5c9d9c9 to provide a more nearly
complete abstraction layer hiding the details of index updating for catalog
changes. That commit only invented abstractions for catalog inserts and
updates, leaving nearby code for catalog deletes still calling the
heap-level routines directly. That seems rather ugly from here, and it
does little to help if we ever want to shift to a storage system in which
indexing work is needed at delete time.
Hence, create a wrapper function CatalogTupleDelete(), and replace calls
of simple_heap_delete() on catalog tuples with it. There are now very
few direct calls of [simple_]heap_delete remaining in the tree.
Discussion: https://postgr.es/m/462.1485902736@sss.pgh.pa.us
"\set" with no arguments displays all defined variables, but it does so
in the order that they appear in variables.c's list, which previously
was mostly creation order. That makes the list ugly and hard to find
things in, and it exposes some psql implementation details to users.
(For instance, ordinary variables will move to the bottom of the list
if unset and set again, but variables that have hooks won't.)
Fix that by keeping the list in alphabetical order at all times, which
isn't much more complicated than breaking out of the insertion search
loops once we reach an entry that should be after the one to be inserted.
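A sketch of that insertion loop, using illustrative struct and field names rather than psql's exact declarations:

#include <string.h>

struct _variable
{
    char   *name;
    char   *value;
    struct _variable *next;
};

/*
 * Sketch only: insert newvar so the list (assumed to carry a dummy
 * header node) stays in name order; the search breaks out as soon
 * as we pass the spot where the new entry belongs.
 */
static void
insert_variable(struct _variable *header, struct _variable *newvar)
{
    struct _variable *prev = header;

    while (prev->next != NULL &&
           strcmp(prev->next->name, newvar->name) < 0)
        prev = prev->next;

    newvar->next = prev->next;
    prev->next = newvar;
}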
Discussion: https://postgr.es/m/31785.1485900786@sss.pgh.pa.us
This commit improves on the results of commit 511ae628f in two ways:
1. It restores the historical behavior that "\set FOO" is interpreted
as setting FOO to "on", if FOO is a boolean control variable. We
already found one test script that was expecting that behavior, and
the psql documentation certainly does nothing to discourage people
from assuming that would work, since it often says just "if FOO is set"
when describing the effects of a boolean variable. However, now this
case will result in actually setting FOO to "on", not an empty string.
2. It arranges for an "\unset" of a control variable to set the value
back to its default value, rather than becoming apparently undefined.
The control variables are also initialized that way at psql startup.
In combination, these things guarantee that a control variable always
has a displayable value that reflects what psql is actually doing.
That is a pretty substantial usability improvement.
The implementation involves adding a second type of variable hook function
that is able to replace a proposed new value (including NULL) with another
one. We could alternatively have complicated the API of the assign hook,
but this way seems better since many variables can share the same
substitution hook function.
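In sketch form, with illustrative signatures, the new hook type looks about like this; the key point is that it can substitute a different value, including turning the NULL from an \unset back into a default:

#include <string.h>

/*
 * Sketch, not psql's exact declarations.  A substitute hook receives the
 * proposed new value (NULL for \unset) and may return a replacement;
 * many variables can share one such function.
 */
typedef char *(*VariableSubstituteHook) (char *newval);

/* e.g., map \unset (NULL) of a boolean control variable to a default */
static char *
bool_substitute_hook(char *newval)
{
    if (newval == NULL)
        newval = strdup("on");      /* illustrative default value */
    return newval;
}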
Also document the actual behavior of these variables more fully,
including covering assorted behaviors that were there before but
never documented.
This patch also includes some minor cleanup that should have been in
511ae628f but was missed.
Patch by me, but it owes a lot to discussions with Daniel Vérité.
Discussion: https://postgr.es/m/9572.1485821620@sss.pgh.pa.us
The rule is that if pg_authid.rolpassword begins with "md5" and has the
right length, it's an MD5 hash; otherwise it's a plaintext password. The
idiom has been to use isMD5() to check for that, but that gets awkward
when we add new kinds of verifiers, like the verifiers for SCRAM
authentication in the pending SCRAM patch set. Replace isMD5() with a new
get_password_type() function, so that when new verifier types are added,
we don't need to remember to modify every place that currently calls
isMD5() to also recognize the new kinds of verifiers.
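A sketch of what such a dispatch function can look like, based on the rule stated above (the enum members and the length check are illustrative):

#include <string.h>

/* Sketch only; new verifier kinds would be added to this enum. */
typedef enum PasswordType
{
    PASSWORD_TYPE_PLAINTEXT,
    PASSWORD_TYPE_MD5
} PasswordType;

static PasswordType
get_password_type(const char *rolpassword)
{
    /* "md5" plus 32 hex digits of hash */
    if (strncmp(rolpassword, "md5", 3) == 0 && strlen(rolpassword) == 35)
        return PASSWORD_TYPE_MD5;
    return PASSWORD_TYPE_PLAINTEXT;
}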
Also, use the new plain_crypt_verify function in passwordcheck, so that it
doesn't need to know about MD5, or in the future, about other kinds of
hashes or password verifiers.
Reviewed by Michael Paquier and Peter Eisentraut.
Discussion: https://www.postgresql.org/message-id/2d07165c-1793-e243-a2a9-e45b624c7580@iki.fi
The "Simplify tape block format" commit ignored the rule that blocks
returned by ltsGetFreeBlock() must be written out in the same order, at
least in the first write pass. To fix, relax that requirement, by making
ltsWriteBlock() to detect if it's about to create a "hole" in the
underlying BufFile, and fill it with zeros instead.
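A simplified sketch of the fix, with approximate names for the logtape structures involved (treat this as an outline of the idea, not the committed code):

/*
 * Sketch: before writing a block past the current end of the underlying
 * BufFile, recursively emit zero-filled blocks to close the "hole".
 */
static void
ltsWriteBlock(LogicalTapeSet *lts, long blocknum, void *buffer)
{
    while (blocknum > lts->nBlocksWritten)
    {
        char        zerobuf[BLCKSZ];

        memset(zerobuf, 0, sizeof(zerobuf));
        ltsWriteBlock(lts, lts->nBlocksWritten, zerobuf);
    }

    if (BufFileSeekBlock(lts->pfile, blocknum) != 0 ||
        BufFileWrite(lts->pfile, buffer, BLCKSZ) != BLCKSZ)
        elog(ERROR, "could not write block %ld of temporary file", blocknum);

    if (blocknum == lts->nBlocksWritten)
        lts->nBlocksWritten++;
}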
Reported, analysed, and reviewed by Peter Geoghegan.
Discussion: https://www.postgresql.org/message-id/CAM3SWZRWdNtkhiG0GyiX_1mUAypiK3dV6-6542pYe2iEL-foTA@mail.gmail.com
The addition of a TestForOldSnapshot() call here has made the
referent of this comment slightly less clear, so move the comment
to compensate.
Amit Kapila (as part of the parallel index scan patch)
Split the existing CatalogUpdateIndexes into two different routines,
CatalogTupleInsert and CatalogTupleUpdate, which each do the heap
insert/update plus the index update. This removes over 300 lines of
boilerplate code all over src/backend/catalog/ and src/backend/commands.
The resulting code is much more pleasing to the eye.
Also, by encapsulating what happens in detail during an UPDATE, this
facilitates the upcoming WARM patch, which is going to add a few more
lines to the update case making the boilerplate even more boring.
The original CatalogUpdateIndexes is removed; there was only one use
left, and since it's just three lines, we may as well expand it in place
there. We could keep it, but WARM is going to break all the UPDATE
out-of-core callsites anyway, so there seems to be no benefit in doing
so.
Author: Pavan Deolasee
Discussion: https://www.postgr.es/m/CABOikdOcFYSZ4vA2gYfs=M2cdXzXX4qGHeEiW3fu9PCfkHLa2A@mail.gmail.com
In commit 23f34fa, we changed how ACLs were handled to use the new
pg_init_privs catalog and to dump out the ACL commands as REVOKE+GRANT
combinations instead of trying to REVOKE all rights always and then
GRANT back just the ones which were in place.
Unfortunately, the DEFAULT PRIVILEGES system didn't quite get the
correct treatment with this change and ended up (incorrectly) only
including positive GRANTs instead of both the REVOKEs and GRANTs
necessary to preserve the correct privileges.
There are only a couple of cases where such REVOKEs are possible
because, generally speaking, few rights exist on objects by default
to be revoked.
Examples of REVOKEs which weren't being correctly preserved are when
privileges are REVOKE'd from the creator/owner, like so:
ALTER DEFAULT PRIVILEGES
FOR ROLE myrole
REVOKE SELECT ON TABLES FROM myrole;
or when other default privileges are being revoked, such as EXECUTE
rights granted to public for functions:
ALTER DEFAULT PRIVILEGES
FOR ROLE myrole
REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC;
Fix this by correctly working out what the correct REVOKE statements are
(if any) and dump them out, just as we do for everything else.
Noticed while developing additional regression tests for pg_dump, which
will be landing shortly.
Back-patch to 9.6 where the bug was introduced.
The pg_dump TAP tests have gotten pretty far from what perltidy thinks
they should be, so fix that, and in passing use long-form argument names
with arguments passed via "=" in a similar vein to 58da833.
No functional changes here, just whitespace and changing runs from
"-f" to "--file=", and similar.
As pointed out by Alvaro, we actually use perltidy on the perl scripts
in the source tree, so go back to the results of a perltidy run for the
test_pg_dump TAP script.
To make it look slightly less tragic, I changed most of the independent
arguments into long-form single arguments (eg: -f file.sql changed to be
--file=file.sql) to avoid having them confusingly split across lines due
to perltidy.
Back-patch to 9.6, as the last patch was.
next_token() oddly set its buffer space consumption limit to one before
the last char position in the buffer, not the last as you'd expect.
The reason is that there was once an ugly kluge to mark keywords by appending
a newline to them, potentially requiring one more byte. Commit e5e2fc842
removed that kluge, but failed to notice that the length limit could be
increased.
Also, remove some vestigial handling of newline characters in the buffer.
That was left over from when this function read the file directly using
getc(). Commit 7f49a67f9 changed it to read from a buffer, from which
tokenize_file had already removed the only possible occurrence of newline,
but did not simplify this function in consequence.
Also, ensure that we don't return with *lineptr set to someplace past the
terminating '\0'; that would be catastrophic if a caller were to ask for
another token from the same line. This is just latent since no callers
actually do call again after a "false" return; but considering that it was
actually costing us extra code to do it wrong, we might as well make it
bulletproof.
Noted while reviewing pg_hba_file_rules patch.
This view is designed along the same lines as pg_file_settings: to wit,
it shows what is currently in the file, not what the postmaster has
loaded as the active settings. That allows it to be used to pre-vet
edits before issuing SIGHUP. As with the earlier view, go out of our
way to allow errors in the file to be reflected in the view, to assist
that use-case.
(We might at some point invent a view to show the current active settings,
but this is not that patch; and it's not trivial to do.)
Haribabu Kommi, reviewed by Ashutosh Bapat, Michael Paquier, Simon Riggs,
and myself
Discussion: https://postgr.es/m/CAJrrPGerH4jiwpcXT1-46QXUDmNp2QDrG9+-Tek_xC8APHShYw@mail.gmail.com
Quite a few of our built-in system views were not exercised anywhere
in the regression tests. This is perhaps not so exciting for the ones
that are simple projections/joins of system catalogs, but for the ones
that are wrappers for set-returning C functions, the omission translates
directly to lack of test coverage for those functions.
In many cases, the reason for the omission is that the view doesn't have
much to do with any specific SQL feature, so there's no natural place to
test it. To remedy that, invent a new script sysviews.sql that's dedicated
to testing SRF-based views. Move a couple of tests that did fit this
charter into the new script, and add simple "count(*)" based tests of
other views within the charter. That's enough to ensure we at least
exercise the main code path through the SRF, although it does little to
prove that the output is sane.
More could be done here, no doubt, and I hope someone will think about
how we can test these views more thoroughly. But this is a starting
point.
Discussion: https://postgr.es/m/19359.1485723741@sss.pgh.pa.us
Previously, if the user set a special variable such as ECHO to an
unrecognized value, psql would bleat but store the new value anyway, and
then fall back to a default setting for the behavior controlled by the
variable. This was agreed to be a not particularly good idea. With
this patch, invalid values result in an error message and no change in
state.
(But this applies only to variables that affect psql's behavior; purely
informational variables such as ENCODING can still be set to random
values.)
To do this, modify the API for psql's assign-hook functions so that they
can return an OK/not OK result, and give them the responsibility for
printing error messages when they reject a value. Adjust the APIs for
ParseVariableBool and ParseVariableNum to support the new behavior
conveniently.
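The revised contract, in sketch form with illustrative signatures: a hook now gets to veto the assignment, reporting its own error, and psql keeps the old value on rejection:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Sketch of the revised hook type; psql's real declarations may differ. */
typedef bool (*VariableAssignHook) (const char *newval);

static bool
echo_hook(const char *newval)
{
    if (newval != NULL &&
        (strcmp(newval, "none") == 0 ||
         strcmp(newval, "queries") == 0 ||
         strcmp(newval, "errors") == 0 ||
         strcmp(newval, "all") == 0))
        return true;            /* accept; the caller stores the value */

    fprintf(stderr, "unrecognized value \"%s\" for \"ECHO\"\n",
            newval ? newval : "");
    return false;               /* reject; the variable is left unchanged */
}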
In passing, document the variable VERSION, which had somehow escaped that.
And improve the quite-inadequate commenting in psql/variables.c.
Daniel Vérité, reviewed by Rahila Syed, some further tweaking by me
Discussion: https://postgr.es/m/7356e741-fa59-4146-a8eb-cf95fd6b21fb@mm
DST law changes in northern Cyprus (new zone Asia/Famagusta), Russia (new
zone Europe/Saratov), Tonga, Antarctica/Casey. Historical corrections for
Asia/Aqtau, Asia/Atyrau, Asia/Gaza, Asia/Hebron, Italy, Malta. Replace
invented zone abbreviation "TOT" for Tonga with numeric UTC offset; but
as in the past, we'll keep accepting "TOT" for input.
In commit 6c268df, pg_init_privs was added to track the initial
privileges of catalog objects and extensions. Unfortunately, that
commit didn't include understanding of ALTER EXTENSION ADD/DROP, which
allows the objects associated with an extension to be changed after the
initial CREATE EXTENSION script has been run.
This meant that ACLs for objects added through ALTER EXTENSION ADD
were not recorded into pg_init_privs, and we would end up including
those ACLs in pg_dump when we shouldn't have.
This commit corrects that by making sure to have pg_init_privs updated
when ALTER EXTENSION ADD/DROP is run, recording the permissions as they
are at ALTER EXTENSION ADD time, and removing any if/when ALTER
EXTENSION DROP is called.
This issue was pointed out by Moshe Jacobson as commentary on bug #14456
(which was actually a bug about versions prior to 9.6 not handling
custom ACLs on extensions correctly, an issue now addressed with
pg_init_privs in 9.6).
Back-patch to 9.6 where pg_init_privs was introduced.
The formatting of the perl hashes used in the TAP tests for test_pg_dump
was rather horribly inconsistent and made it more difficult than it
really should have been to add new tests or adjust what tests are for
what runs, etc.
Reformat to clean that all up.
Whitespace-only changes.
Currently, we only need this logic in order to cost a Bitmap Heap
Scan. But a pending patch for Parallel Bitmap Heap Scan also uses
it to help figure out how many workers to use for the scan, which
has to be determined prior to costing. So, move the logic to
a separate function to make that easier.
Dilip Kumar. The patch series of which this is a part has been
reviewed by Andres Freund, Amit Khendekar, Tushar Ahuja, Rafia
Sabih, Haribabu Kommi, and me; it is not clear from the email
discussion which of those people have looked specifically at this
part.
Discussion: http://postgr.es/m/CAFiTN-v3QYNJEZnnmKCeATuLbN-h9tMVfeEF0+BrouYDqjXgwg@mail.gmail.com
tokenize_file() now returns a single list of TokenizedLine structs,
carrying the same information as before. We were otherwise going to grow a
fourth list to deal with error messages, and that was getting a bit silly.
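A sketch of the consolidated struct, with field names approximating (but not guaranteed to match) the real ones:

/*
 * Sketch only: one entry per line of the file, replacing what would
 * otherwise have been a fourth parallel list for error messages.
 */
typedef struct TokenizedLine
{
    List   *fields;         /* tokenized fields of this line, if parseable */
    int     line_num;       /* line number in the source file */
    char   *raw_line;       /* raw text of the line */
    char   *err_msg;        /* error message, or NULL if none */
} TokenizedLine;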
Haribabu Kommi, revised a bit by me
Discussion: https://postgr.es/m/CAJrrPGfbgbKsjYp=bgZXhMcgxoaGSoBb9fyjrDoOW_YymXv1Kw@mail.gmail.com
When I wrote commit ab1f0c822, I really missed the castNode() macro that
Peter E. had proposed shortly before. This back-fills the uses I would
have put it to. It's probably not all that significant, but there are
more assertions here than there were before, and conceivably they will
help catch any bugs associated with those representation changes.
I left behind a number of usages like "(Query *) copyObject(query_var)".
Those could have been converted as well, but Peter has proposed another
notational improvement that would handle copyObject cases automatically,
so I let that be for now.
The new macro allows casting from one NodeTag-based type to another,
while asserting that the conversion is valid. This replaces the common
pattern of doing a cast together with a nearby Assert(IsA(ptr, type)).
As this seems likely to be used pervasively, we decided to back-patch
the addition of this macro; otherwise, back-patched fixes that use it
would be more likely not to work on back branches.
On branches before 9.6, where we do not yet rely on inline functions
being available, the type assertion is only performed if PG_USE_INLINE
support is detected. The cast obviously is performed regardless.
For the benefit of verifying the macro compiles in the back-branches,
this commit contains a single use of the new macro. On master, a
somewhat larger conversion will be committed separately.
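For reference, the macro is roughly of this shape (treat the details as illustrative): with assertions enabled, an inline helper checks the node tag; without, it degenerates to a plain cast.

#ifdef USE_ASSERT_CHECKING
static inline Node *
castNodeImpl(NodeTag type, void *ptr)
{
    Assert(ptr == NULL || nodeTag(ptr) == type);
    return (Node *) ptr;
}
#define castNode(_type_, nodeptr) ((_type_ *) castNodeImpl(T_##_type_, nodeptr))
#else
#define castNode(_type_, nodeptr) ((_type_ *) (nodeptr))
#endif

/* Typical use, replacing a cast plus a nearby Assert(IsA(...)): */
FuncExpr   *fexpr = castNode(FuncExpr, node);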
Author: Peter Eisentraut and Andres Freund
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/c5d387d9-3440-f5e0-f9d4-71d53b9fbe52@2ndquadrant.com
Backpatch: 9.2-
Our current DDL only allows a database name to be specified in COMMENT
ON DATABASE, which Andrew Dunstan reports to make this test fail on the
buildfarm. Remove the line until we gain a DDL command that allows the
current database to be operated on without having to specify it by
name.
Backpatch to 9.5, where these tests appeared.
Discussion: https://postgr.es/m/e6084b89-07a7-7e57-51ee-d7b8fc9ec864@2ndQuadrant.com
We maintained two separate expected files because log_cnt could be one
of two values. Rewrite the test so that we only need one file.
Reviewed-by: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Hot_standby_feedback could be reset by reload and worked correctly in
that case, but if the server was restarted rather than reloaded, the
xmin was not reset. Force a reset always if hot_standby_feedback is
enabled at startup.
Ants Aasma, Craig Ringer
Reported-by: Ants Aasma
!foo means "the tsvector does not contain foo", and therefore it should
match an empty tsvector. ts_match_vq() overenthusiastically supposed
that an empty tsvector could never match any query, so it forcibly
returned FALSE, the wrong answer. Remove the premature optimization.
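In sketch form, the change in ts_match_vq() is along these lines (simplified):

/* Before (sketch): an empty tsvector short-circuited to false,
 * which is the wrong answer for negated queries such as !foo. */
if (!val->size || !query->size)
    return false;

/* After (sketch): only an empty query can't match; an empty tsvector
 * must still be run through the normal query evaluation. */
if (!query->size)
    return false;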
Our behavior on this point was inconsistent, because while seqscans and
GIST index searches both failed to match empty tsvectors, GIN index
searches would find them, since GIN scans don't rely on ts_match_vq().
That makes this certainly a bug, not a debatable definition disagreement,
so back-patch to all supported branches.
Report and diagnosis by Tom Dunstan (bug #14515); added test cases by me.
Discussion: https://postgr.es/m/20170126025524.1434.97828@wrigleys.postgresql.org
This improves readability a bit and may make future improvements easier.
In passing, make sure that the JB_ROOT_IS_XXX macros deliver boolean (0/1)
results; the previous coding was a bug hazard, though no actual bugs are
known.
Nikita Glukhov, extended a bit by me
Discussion: https://postgr.es/m/9e21a39c-c1d7-b9b5-44a0-c5345a5029f6@postgrespro.ru
There's really no situation where we don't want these unknown-to-text
conversions to happen. The alternative is failure anyway, and the one
caller that was passing "false" did so only because it expected the
case could not arise. Might as well simplify the code.
Discussion: https://postgr.es/m/CAH2L28uwwbL9HUM-WR=hromW1Cvamkn7O-g8fPY2m=_7muJ0oA@mail.gmail.com
Previously, type "unknown" was labeled as a base type in pg_type, which
perhaps had some sense to it because you were allowed to create tables with
unknown-type columns. But now that we don't allow that, it makes more
sense to label it a pseudo-type. This has the additional effects of
forbidding use of "unknown" as a domain base type, cast source or target
type, PL function argument or result type, or plpgsql local variable type;
all of which seem like good holes to plug.
Discussion: https://postgr.es/m/CAH2L28uwwbL9HUM-WR=hromW1Cvamkn7O-g8fPY2m=_7muJ0oA@mail.gmail.com
Previously, we left such literals alone if the query or subquery had
no properties forcing a type decision to be made (such as an ORDER BY or
DISTINCT clause using that output column). This meant that "unknown" could
be an exposed output column type, which has never been a great idea because
it could result in strange failures later on. For example, an outer query
that tried to do any operations on an unknown-type subquery output would
generally fail with some weird error like "failed to find conversion
function from unknown to text" or "could not determine which collation to
use for string comparison". Also, if the case occurred in a CREATE VIEW's
query then the view would have an unknown-type column, causing similar
failures in queries trying to use the view.
To fix, at the tail end of parse analysis of a query, forcibly convert any
remaining "unknown" literals in its SELECT or RETURNING list to type text.
However, provide a switch to suppress that, and use it in the cases of
SELECT inside a set operation or INSERT command. In those cases we already
had type resolution rules that make use of context information from outside
the subquery proper, and we don't want to change that behavior.
Also, change creation of an unknown-type column in a relation from a
warning to a hard error. The error should be unreachable now in CREATE
VIEW or CREATE MATVIEW, but it's still possible to explicitly say "unknown"
in CREATE TABLE or CREATE (composite) TYPE. We want to forbid that because
it's nothing but a foot-gun.
This change creates a pg_upgrade failure case: a matview that contains an
unknown-type column can't be pg_upgraded, because reparsing the matview's
defining query will now decide that the column is of type text, which
doesn't match the cstring-like storage that the old materialized column
would actually have. Add a checking pass to detect that. While at it,
we can detect tables or composite types that would fail, essentially
for free. Those would fail safely anyway later on, but we might as
well fail earlier.
This patch is by me, but it owes something to previous investigations
by Rahila Syed. Also thanks to Ashutosh Bapat and Michael Paquier for
review.
Discussion: https://postgr.es/m/CAH2L28uwwbL9HUM-WR=hromW1Cvamkn7O-g8fPY2m=_7muJ0oA@mail.gmail.com
According to the comments in tupconvert.c, it's necessary to perform
tuple conversion when either table has OIDs, and this was previously
checked by ensuring that the tdtypeid value matched between the tables
in question. However, that's overly stringent: we have access to
tdhasoid and can test directly whether OIDs are present, which lets us
avoid conversion in cases where the type OIDs are different but the
tuple descriptors are entirely the same (and neither has OIDs). This
is useful to the partitioning code, which can thereby avoid converting
tuples when inserting into a partition whose columns appear in the
same order as the parent columns, the normal case. It's possible
for the tuple routing code to avoid some additional overhead in this
case as well, so do that, too.
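A sketch of the relaxed test (simplified; attrs_identical() here is a hypothetical helper standing in for the real column-by-column comparison):

/*
 * Sketch only.  Previously the code effectively insisted on tdtypeid
 * equality; now it is enough that neither descriptor carries OIDs and
 * the attribute lists match.
 */
static bool
conversion_needed(TupleDesc indesc, TupleDesc outdesc)
{
    if (indesc->tdhasoid || outdesc->tdhasoid)
        return true;            /* stay conservative when OIDs are present */

    /* same columns in the same order => no conversion map required */
    return !attrs_identical(indesc, outdesc);
}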
It's not clear whether it would be OK to skip this when both tables
have OIDs: do callers count on this to build a new tuple (losing the
previous OID) in such instances? Until we figure it out, leave the
behavior in that case alone.
Amit Langote, reviewed by me.
Commit 587cda35c added a test to updatable_views.sql that created
tables named the same as tables used by the concurrent inherit.sql
script. Unsurprisingly, this results in random failures.
Pick different names.
Per buildfarm.
In the new code for selecting sequence data from pg_sequence, set the
schema to pg_catalog instead of the sequence's own schema, and refer to
the sequence by OID instead of name, which was missing a schema
qualification.
Reported-by: Stephen Frost <sfrost@snowman.net>
Formerly an alternate password file could only be selected via the
environment variable PGPASSFILE; now it can also be selected via a
new connection parameter "passfile", corresponding to the conventions
for most other connection parameters. There was some concern about
this creating a security weakness, but it was agreed that that argument
was pretty thin, and there are clear use-cases for handling password
files this way.
Julian Markwort, reviewed by Fabien Coelho, some adjustments by me
Discussion: https://postgr.es/m/a4b4f4f1-7b58-a0e8-5268-5f7db8e8ccaa@uni-muenster.de
This is useful infrastructure for an upcoming proposed patch to
allow the WAL segment size to be changed at initdb time; tools like
pg_basebackup need the ability to interrogate the server setting.
But it also doesn't seem like a bad thing to have independently of
that; it may find other uses in the future.
Robert Haas and Beena Emerson. (The original patch here was by
Beena, but I rewrote it to such a degree that most of the code
being committed here is mine.)
Discussion: http://postgr.es/m/CA+TgmobNo4qz06wHEmy9DszAre3dYx-WNhHSCbU9SAwf+9Ft6g@mail.gmail.com
If you create a DestReceiver of type DestRemote and try to use it from
a replication connection that is not bound to a specific database, or
any other hypothetical type of backend that is not bound to a specific
database, it will fail because it doesn't have a pg_proc catalog to
look up properties of the types being printed. In general, that's
an unavoidable problem, but we can hardwire the properties of a few
builtin types in order to support utility commands. This new
DestReceiver of type DestRemoteSimple does just that.
Patch by me, reviewed by Michael Paquier.
Discussion: http://postgr.es/m/CA+TgmobNo4qz06wHEmy9DszAre3dYx-WNhHSCbU9SAwf+9Ft6g@mail.gmail.com
This patch doesn't actually make any index AM parallel-aware, but it
provides the necessary functions at the AM layer to do so.
Rahila Syed, Amit Kapila, Robert Haas
Previously, ExecInitModifyTable was missing handling for WITH CHECK
OPTION, and view_query_is_auto_updatable was missing handling for
RELKIND_PARTITIONED_TABLE.
Amit Langote, reviewed by me.
In 2ac3ef7a01, we changed things so that
it's possible for a different TupleTableSlot to be used for partitioned
tables at successively lower levels. If we do end up changing the slot
from the original, we must update ecxt_scantuple to point to the new one
for the partition key of the tuple to be computed correctly.
Reported by Rajkumar Raghuwanshi. Patch by Amit Langote.
Discussion: http://postgr.es/m/CAKcux6%3Dm1qyqB2k6cjniuMMrYXb75O-MB4qGQMu8zg-iGGLjDw%40mail.gmail.com
We've accumulated quite a bit of stuff with which pgindent is not
quite happy in this code; clean it up to provide a less-annoying base
for future pgindent runs.
The code here previously tried to call the partitioning operator, but
really the right thing to do (and the safe thing to do) is use
datumIsEqual().
Amit Langote, but I expanded the comment and fixed a compiler warning.
For some reason that is lost in history, a descending sequence would
default its minimum value to -2^63+1 (-PG_INT64_MAX) instead of
-2^63 (PG_INT64_MIN), even though explicitly specifying a minimum value
of -2^63 would work. Fix this inconsistency by using the full range by
default.
Reported-by: Daniel Verite <daniel@manitou-mail.org>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
initdb used to warn about that, but it was changed to an error in
pg_import_system_locales, and some buildfarm members failed because of
that. Change it back to a warning.
Vacuum's truncation scan can be sped up on rotating media by prefetching
blocks in the forward direction. That way, the blocks are already present
in memory by the time they are needed, while also letting the OS
read-ahead kick in.
The truncate scan has been measured to be five times faster than without
this patch (that was on a slow disk, but it shouldn't hurt on fast
disks.)
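A sketch of the prefetch logic, with approximate names (the real code lives in vacuum's truncation scan; this is an outline of the idea, not a verbatim copy). The scan walks backwards from the end of the table; whenever it drops below the region already prefetched, it issues prefetches for the next aligned chunk in ascending block order, so OS read-ahead can help even though the scan itself runs backwards.

#define PREFETCH_SIZE ((BlockNumber) 32)    /* illustrative chunk size */

if (prefetchedUntil > blkno)
{
    BlockNumber prefetchStart = blkno & ~(PREFETCH_SIZE - 1);
    BlockNumber pblkno;

    /* prefetch the chunk containing blkno, in forward order */
    for (pblkno = prefetchStart; pblkno <= blkno; pblkno++)
        PrefetchBuffer(onerel, MAIN_FORKNUM, pblkno);
    prefetchedUntil = prefetchStart;
}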
Author: Álvaro Herrera, loosely based on a submission by Claudio Freire
Discussion: https://postgr.es/m/CAGTBQpa6NFGO_6g_y_7zQx8L9GcHDSQKYdo1tGuh791z6PYgEg@mail.gmail.com
Joining three tables only takes two join nodes. I think when I (tgl)
wrote this, I was envisioning possible additional joins; but since the
example doesn't show any fourth table, it's just confusing to write
a third join node.
Etsuro Fujita
Discussion: https://postgr.es/m/e6cfbaa3-af02-1abc-c25e-8fa5c6bc4e21@lab.ntt.co.jp
This appears to be necessary to fix a failure seen on buildfarm member
sittella. It shouldn't be necessary according to the letter of the C
standard, because we don't change the values of these variables within
the PG_TRY blocks; but somehow gcc 4.7.2 is dropping the ball.
Discussion: https://postgr.es/m/17555.1485179975@sss.pgh.pa.us
Missing a destroyPQExpBuffer() in the early exit branch. The early
exits aren't really necessary; most similar functions just run the rest
of the code zero times and clean up at the end.
Project style is to put things in this order, for the good and sufficient
reason that you often need the typedefs in the function declarations.
There already was one function declaration that needed a typedef, which
was randomly placed away from all the other static function declarations
in consequence. And the submitted patch for better json_populate_record
functionality jumped through even more hoops in order to preserve this
bad idea.
This patch only moves lines from point A to point B, no other changes.
Coverity complained quite properly that commit ea15e1867 had introduced
unreachable code into ExecGather(); to wit, it was no longer possible to
iterate the final for-loop more or less than once. So remove the for().
In passing, clean up a couple of comments, and make better use of a local
variable.
Turns out this has been broken for years and we'd not noticed. The one
case that was getting exercised in the buildfarm, or probably anywhere
else, was postgres_fdw.sl's reference to libpq.sl; and it turns out that
that was always going to libpq.sl in the actual installation directory
not the temporary install. We'd not noticed because the buildfarm script
does "make install" before it tests contrib. However, the recent addition
of a logical-replication test to the core regression scripts resulted in
trying to use libpqwalreceiver.sl before "make install" happens, and that
failed for lack of finding libpq.sl, as shown by failures on buildfarm
members gaur and pademelon.
There are two changes needed to fix it: the magic environment variable to
specify shlib search path at runtime is SHLIB_PATH not LD_LIBRARY_PATH,
and the shlib link command needs to specify the +s switch else the library
will not honor SHLIB_PATH.
I'm not quite sure why buildfarm members anole and gharial (HPUX 11) didn't
show the same failure. Consulting man pages on the web says that HPUX 11
honors both LD_LIBRARY_PATH and SHLIB_PATH, which would explain half of it,
and the rather confusing wording I've been able to find suggests that +s
might effectively be the default in HPUX 11. But it seems at least as
likely that there's just a libpq.so installed in /usr/lib on that machine;
as long as it's not too ancient, that would satisfy the test. In any case
I do not think this patch will break HPUX 11.
At the moment I don't see a need to back-patch this, since it only matters
for testing purposes, not to mention that HPUX 10 is probably dead in the
real world anyway.
When (1) autovacuum = off and (2) there's at least one database with
an XID age greater than autovacuum_freeze_max_age and (3) all tables
in that database that need vacuuming are already being processed by a
worker and (4) the autovacuum launcher is started, a kind of infinite
loop occurs. The launcher starts a worker and immediately exits. The
worker, finding no work to do, immediately starts the launcher,
supposedly so that the next database can be processed. But because
datfrozenxid for that database hasn't been advanced yet, the new
worker gets put right back into the same database as the old one,
where it once again starts the launcher and exits. High-speed ping
pong ensues.
There are several possible ways to break the cycle; this seems like
the safest one.
Amit Khandekar (code) and Robert Haas (comments), reviewed by
Álvaro Herrera.
Discussion: http://postgr.es/m/CAJ3gD9eWejf72HKquKSzax0r+epS=nAbQKNnykkMA0E8c+rMDg@mail.gmail.com
If either bound is infinite, then we shouldn't even try to perform a
comparison of the values themselves. Rearrange the logic so that
we don't.
Per buildfarm member skink and Tom Lane.
This was forgotten in 665d1fad99 and
caused the whole buildfarm to become red for a little while.
Author: Petr Jelínek
Also fix a typo in a nearby error message.
pgoutput evidently needs to be built without -DBUILDING_DLL. (It seems
like a pretty bad idea that these makefiles need to know exactly where
all the shlibs are in the tree, or maybe what's bad is putting them under
src/backend/. But right now is not the time to redesign that.)
Also, remove "override CPPFLAGS" in pgoutput's Makefile. I don't think
that that actually has any bad consequences, but it's certainly useless
in a directory that has no .h files, and it might be contributing to the
failure somehow.
Per buildfarm.
A pgbench meta command can now be continued onto additional line(s) of a
script file by writing backslash-return. The continuation marker is
equivalent to white space in that it separates tokens.
Eventually it'd be nice to have the same thing in psql, but that will
be a much larger project.
Fabien Coelho, reviewed by Rafia Sabih
Discussion: https://postgr.es/m/alpine.DEB.2.20.1610031049310.19411@lancre
The publication test didn't drop all of the publications it created,
though it apparently intended to. There is still a bug with dependency
tracking in there, but this should at least quiet down the buildfarm.
Brown-paper-bag bug in commit ab1f0c822: the old code here coped with
null CachedPlanSource.raw_parse_tree, the new code not so much.
Per report from Dave Cramer.
No regression test, because our core testing infrastructure doesn't
provide any easy way to exercise this path. Fortunately, the JDBC
crew test it regularly.
Discussion: https://postgr.es/m/CADK3HH+Ug3xCysKqw_dZOnaNnytZ1Rh5yP05hjO-e4NoyRxVvA@mail.gmail.com
I'd somehow talked myself into believing that set_append_rel_size
doesn't need to worry about getting back an AND clause when it applies
eval_const_expressions to the result of adjust_appendrel_attrs (that is,
transposing the appendrel parent's restriction clauses for one child).
But that is nonsense, and Andreas Seltenreich's fuzz tester soon
turned up a counterexample. Put back the make_ands_implicit step
that was there before, and add a regression test covering the case.
Report: https://postgr.es/m/878tq6vja6.fsf@ansel.ydns.eu
Due to the changed costing in that commit, hash aggregates started to
be used, which results in big-endian vs. little-endian output
differences. Disable hash-aggs for those tests.
Author: Andres Freund, with input from Tom Lane
Discussion: https://postgr.es/m/22891.1484791792@sss.pgh.pa.us
Since 69f4b9c, plain expression evaluation (and thus normal projection)
can no longer return sets of tuples. Thus remove the code dealing with
that possibility.
This will require adjustments in external code using
ExecEvalExpr()/ExecProject() - that should neither be hard nor very
common.
Author: Andres Freund and Tom Lane
Discussion: https://postgr.es/m/20160822214023.aaxz5l4igypowyri@alap3.anarazel.de