specscanner.l had a fixed limit of 1024 bytes on the length of
individual SQL stanzas in an isolation test script. People are
starting to run into that, so fix it by making the buffer resizable.
Once we allow this in HEAD, it seems inevitable that somebody will
try to back-patch a test that exceeds the old limit, so back-patch
this change as a preventive measure.
Daniel Gustafsson
Discussion: https://postgr.es/m/8D628BE4-6606-4FF6-A3FF-8B2B0E9B43D0@yesql.se
The previous code considered two tables to have the same partition
scheme if the underlying columns had the same collation, but what we
actually need to compare is not the collation associated with the
column but the collation used for partitioning. Fix that.
Robert Haas and Amit Langote
Discussion: http://postgr.es/m/0f95f924-0efa-4cf5-eb5f-9a3d1bc3c33d@lab.ntt.co.jp
Parallel-aware plan nodes must be prepared to run without parallelism
if it's not possible at execution time for whatever reason. Commit
ab72716778, which introduced Parallel
Append, overlooked this.
Rajkumar Raghuwanshi reported this problem, and I included his test
case in this patch. The code changes are by me.
Discussion: http://postgr.es/m/CAKcux6=WqkUudLg1GLZZ7fc5ScWC1+Y9qD=pAHeqy32WoeJQvw@mail.gmail.com
Although configure-based builds correctly define HAVE_LONG_LONG_INT when
appropriate (in both pg_config.h and ecpg_config.h), builds using the MSVC
scripts failed to do so. This currently has no impact on the backend,
since it uses that symbol nowhere; but it does prevent ecpg from
supporting "long long int". Fix that.
Also, adjust Solution.pm so that in the constructed ecpg_config.h file,
the "#if (_MSC_VER > 1200)" covers only the LONG_LONG_INT-related
#defines, not the whole file. AFAICS this was a thinko on somebody's
part: ENABLE_THREAD_SAFETY should always be defined in Windows builds,
and in branches using USE_INTEGER_DATETIMES, the setting of that shouldn't
depend on the compiler version either. If I'm wrong, I imagine the
buildfarm will say so.
Per bug #15080 from Jonathan Allen; issue diagnosed by Michael Meskes
and Andrew Gierth. Back-patch to all supported branches.
Discussion: https://postgr.es/m/151935568942.1461.14623890240535309745@wrigleys.postgresql.org
Tom Kazimiers reported that transition tables don't work correctly when
they are scanned by more than one executor node. That's because commit
18ce3a4ab allocated separate read pointers for each executor node, as it
must, but failed to make them active at the appropriate times. Repair.
Thomas Munro
Discussion: https://postgr.es/m/20180224034748.bixarv6632vbxgeb%40dewberry.localdomain
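For illustration, a minimal sketch (all names invented) of the
multiple-scan case: a statement trigger whose function reads the same
transition table from two executor nodes at once:
CREATE TABLE t42 (a int);
CREATE FUNCTION count_pairs() RETURNS trigger LANGUAGE plpgsql AS $$
DECLARE n int;
BEGIN
  -- the self-join scans transition table "nt" via two executor nodes
  SELECT count(*) INTO n FROM nt x, nt y;
  RAISE NOTICE 'pairs: %', n;
  RETURN NULL;
END $$;
CREATE TRIGGER t42_au AFTER UPDATE ON t42
  REFERENCING NEW TABLE AS nt
  FOR EACH STATEMENT EXECUTE PROCEDURE count_pairs();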
A before-update row trigger may choose to return the "new" or "old" tuple
unmodified. ExecBRUpdateTriggers failed to consider the second
possibility, and would proceed to free the "old" tuple even if it was the
one returned, leading to subsequent access to already-deallocated memory.
In debug builds this reliably leads to an "invalid memory alloc request
size" failure; in production builds it might accidentally work, but data
corruption is also possible.
This is a very old bug. There are probably a couple of reasons it hasn't
been noticed up to now. It would be more usual to return NULL if one
wanted to suppress the update action; returning "old" is significantly less
efficient since the update will occur anyway. Also, none of the standard
PLs would ever cause this because they all returned freshly-manufactured
tuples even if they were just copying "old". But commit 4b93f5799 changed
that for plpgsql, making it possible to see the bug with a plpgsql trigger.
Still, this is certainly legal behavior for a trigger function, so it's
ExecBRUpdateTriggers's fault not plpgsql's.
It seems worth creating a test case that exercises returning "old" directly
with a C-language trigger; testing this through plpgsql seems unreliable
because its behavior might change again.
Report and fix by Rushabh Lathia; regression test case by me.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/CAGPqQf1P4pjiNPrMof=P_16E-DFjt457j+nH2ex3=nBTew7tXw@mail.gmail.com
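For reference, the legal-but-formerly-dangerous trigger shape, as a
minimal plpgsql sketch (all names invented; the committed regression
test uses a C-language trigger instead, for the reason given above):
CREATE TABLE t64 (a int, b text);
CREATE FUNCTION keep_old() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
  RETURN OLD;  -- legal: the update still occurs, storing the old values
END $$;
CREATE TRIGGER t64_bu BEFORE UPDATE ON t64
  FOR EACH ROW EXECUTE PROCEDURE keep_old();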
Commit 3bf05e096b sometimes uses the
cheapest_partial_path variable in this function to mean the cheapest
one from the input rel and at other times the cheapest one from the
partially grouped rel, but it never resets it, so we can end up with
bad plans, leading to "ERROR: Aggref found in non-Agg plan node".
Jeevan Chalke, per a report from Andreas Joseph Krogh and a separate
off-list report from Rajkumar Raghuwanshi
Discussion: http://postgr.es/m/CAM2+6=X9kxQoL2ZqZ00E6asBt9z+rfyWbOmhXJ0+8fPAyMZ9Jg@mail.gmail.com
It's a bit silly to have test functions that aren't tested, so test
them.
In passing, rename int44in/int44out to city_budget_in/_out so that they
match how the regression tests use them. Also, fix city_budget_out
so that it emits the format city_budget_in expects to read; otherwise
we'd have dump/reload failures when testing pg_dump against the
regression database. (We avoided that in the past only because no
data of type city_budget was actually stored anywhere.)
Discussion: https://postgr.es/m/29322.1519701006@sss.pgh.pa.us
This patch removes five functions that presumably were once used in the
regression tests, but haven't been so used in many years. Nonetheless
we've been wasting maintenance effort on them (e.g., by converting them
to V1 function protocol). I see no reason to think that reviving them
would add any useful test coverage, so drop 'em.
In passing, mark regress_lseg_construct static, since it's not called
from outside this file.
Discussion: https://postgr.es/m/29322.1519701006@sss.pgh.pa.us
The ability to create like-named objects in different schemas opens up
the potential for users to change the behavior of other users' queries,
maliciously or accidentally. When you connect to a PostgreSQL server,
you should remove from your search_path any schema for which a user
other than yourself or superusers holds the CREATE privilege. If you do
not, other users holding CREATE privilege can redefine the behavior of
your commands, causing them to perform arbitrary SQL statements under
your identity. "SET search_path = ..." and "SELECT
pg_catalog.set_config(...)" are not vulnerable to such hijacking, so one
can use either as the first command of a session. As special
exceptions, the following client applications behave as documented
regardless of search_path settings and schema privileges: clusterdb
createdb createlang createuser dropdb droplang dropuser ecpg (not
programs it generates) initdb oid2name pg_archivecleanup pg_basebackup
pg_config pg_controldata pg_ctl pg_dump pg_dumpall pg_isready
pg_receivewal pg_recvlogical pg_resetwal pg_restore pg_rewind pg_standby
pg_test_fsync pg_test_timing pg_upgrade pg_waldump reindexdb vacuumdb
vacuumlo. Not included are core client programs that run user-specified
SQL commands, namely psql and pgbench. PostgreSQL encourages non-core
client applications to do likewise.
Document this in the context of libpq connections, psql connections,
dblink connections, ECPG connections, extension packaging, and schema
usage patterns. The principal defense for applications is "SELECT
pg_catalog.set_config('search_path', '', false)", and the principal
defense for databases is "REVOKE CREATE ON SCHEMA public FROM PUBLIC".
Either one is sufficient to prevent attack. After a REVOKE, consider
auditing the public schema for objects named like pg_catalog objects.
Authors of SECURITY DEFINER functions use some of the same defenses, and
the CREATE FUNCTION reference page already covered them thoroughly.
This is a good opportunity to audit SECURITY DEFINER functions for
robust security practice.
Back-patch to 9.3 (all supported versions).
Reviewed by Michael Paquier and Jonathan S. Katz. Reported by Arseniy
Sharoglazov.
Security: CVE-2018-1058
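For easy reference, the two defenses named above, verbatim:
SELECT pg_catalog.set_config('search_path', '', false);  -- per application
REVOKE CREATE ON SCHEMA public FROM PUBLIC;               -- per database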
This makes the client programs behave as documented regardless of the
connect-time search_path and regardless of user-created objects. Today,
a malicious user with CREATE permission on a search_path schema can take
control of certain of these clients' queries and invoke arbitrary SQL
functions under the client identity, often a superuser. This is
exploitable in the default configuration, where all users have CREATE
privilege on schema "public".
This changes behavior of user-defined code stored in the database, like
pg_index.indexprs and pg_extension_config_dump(). If they reach code
bearing unqualified names, "does not exist" or "no schema has been
selected to create in" errors might appear. Users may fix such errors
by schema-qualifying affected names. After upgrading, consider watching
server logs for these errors.
The --table arguments of src/bin/scripts clients have been lax; for
example, "vacuumdb -Zt pg_am\;CHECKPOINT" performed a checkpoint. That
now fails, but for now, "vacuumdb -Zt 'pg_am(amname);CHECKPOINT'" still
performs a checkpoint.
Back-patch to 9.3 (all supported versions).
Reviewed by Tom Lane, though this fix strategy was not his first choice.
Reported by Arseniy Sharoglazov.
Security: CVE-2018-1058
Historically, pg_dump has "set search_path = foo, pg_catalog" when
dumping an object in schema "foo", and has also caused that setting
to be used while restoring the object. This is problematic because
functions and operators in schema "foo" could capture references meant
to refer to pg_catalog entries, both in the queries issued by pg_dump
and those issued during the subsequent restore run. That could
result in dump/restore misbehavior, or in privilege escalation if a
nefarious user installs trojan-horse functions or operators.
This patch changes pg_dump so that it does not change the search_path
dynamically. The emitted restore script sets the search_path to what
was used at dump time, and then leaves it alone thereafter. Created
objects are placed in the correct schema, regardless of the active
search_path, by dint of schema-qualifying their names in the CREATE
commands, as well as in subsequent ALTER and ALTER-like commands.
Since this change requires a change in the behavior of pg_restore
when processing an archive file made according to this new convention,
bump the archive file version number; old versions of pg_restore will
therefore refuse to process files made with new versions of pg_dump.
Security: CVE-2018-1058
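Schematically (an illustration of the convention, not literal pg_dump
output), the change in the emitted script amounts to:
-- old convention: restore relies on the search_path set per object
SET search_path = foo, pg_catalog;
CREATE TABLE t (a integer);
-- new convention: search_path is left alone; names are schema-qualified
CREATE TABLE foo.t (a integer);
ALTER TABLE foo.t OWNER TO alice;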
Up until now, we've abused grouped_rel->partial_pathlist as a place to
store partial paths that have been partially aggregated, but that's
really not correct, because a partial path for a relation is supposed
to be one which produces the correct results with the addition of only
a Gather or Gather Merge node, and these paths also require a Finalize
Aggregate step. Instead, add a new partially_grouped_rel which can hold
either partial paths (which need to be gathered and then have
aggregation finalized) or non-partial paths (which only need to have
aggregation finalized). This allows us to reuse generate_gather_paths
for partially_grouped_rel instead of writing new code, so that this
patch adds basically no net new code while making things cleaner and
simplifying things for pending patches for partition-wise aggregate.
Robert Haas and Jeevan Chalke. The larger patch series of which this
patch is a part was also reviewed and tested by Antonin Houska,
Rajkumar Raghuwanshi, David Rowley, Dilip Kumar, Konstantin Knizhnik,
Pascal Legrand, Rafia Sabih, and me.
Discussion: http://postgr.es/m/CA+TgmobrzFYS3+U8a_BCy3-hOvh5UyJbC18rEcYehxhpw5=ETA@mail.gmail.com
Discussion: http://postgr.es/m/CA+TgmoZyQEjdBNuoG9-wC5GQ5GrO4544Myo13dVptvx+uLg9uQ@mail.gmail.com
Commit b3f840120 changed pg_upgrade so that it'd actually drop and
re-create the template1 and postgres databases in the new cluster.
That works fine, serially. With the -j option it's not so fine, because
other per-database jobs might be launched while the template1 database is
dropped. Since they attempt to connect there to start up, kaboom.
This is the cause of the intermittent failures buildfarm member jacana
has been showing for the last month; evidently it is the only BF member
configured to run the pg_upgrade test with parallelism enabled.
Fix by processing template1 separately before we get into the parallel
sub-job launch loop. (We could alternatively have made the postgres DB
be the special case, but it seems likely that template1 will contain
less stuff and so we lose less parallelism with this choice.)
Recent Perl versions don't have the current directory in the module
include path anymore, so we need to add it here explicitly to make these
scripts continue to work.
Commit 0a459cec9 left this for later, but since time's running out,
I went ahead and took care of it. There are more data types that
somebody might someday want RANGE support for, but this is enough
to satisfy all expectations of the SQL standard, which just says that
"numeric, datetime, and interval" types should have RANGE support.
In tests that check whether a connection fails, also check the error
message. That makes sure that the connection was rejected for the right
reason.
This discovered that two tests had their connection failing for the
wrong reason. One test failed because pg_hba.conf was not set up to
allow that user, one test failed because the client key file did not
have the right permissions. Fix those tests and add a new one that is
really supposed to check the file permission issue.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
In the pgoutput plugin, skip changes for relations that are not
publishable, per is_publishable_class(). This concerns in particular
materialized views and information_schema tables. While those relations
cannot be part of a publication, per existing checks, they will be
considered by a FOR ALL TABLES publication. A subscription would not
actually apply changes for those relations, again per existing checks,
but trying to match incoming changes to local tables on the subscriber
would lead to errors if no matching local table exists. Skipping those
changes on the publisher avoids sending useless changes and eliminates
the error.
Bug: #15044
Reported-by: Chad Trabant <chad@iris.washington.edu>
Reviewed-by: Petr Jelinek <petr.jelinek@2ndquadrant.com>
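A minimal way to reach the skipped case (names invented): a FOR ALL
TABLES publication alongside a materialized view, whose heap changes
pgoutput now ignores:
CREATE PUBLICATION allpub FOR ALL TABLES;
CREATE MATERIALIZED VIEW mv AS SELECT 1 AS x;
REFRESH MATERIALIZED VIEW mv;  -- changes to mv are no longer streamed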
This is mostly cosmetic, but it might fix build failures, on some
platform, when copying from the documentation.
Back-patch to 9.3 (all supported versions).
Given overlapping or partially redundant join clauses, for example
t1 JOIN t2 ON t1.a = t2.x AND t1.b = t2.x
the planner's EquivalenceClass machinery will ordinarily refactor the
clauses as "t1.a = t1.b AND t1.a = t2.x", so that join processing doesn't
see multiple references to the same EquivalenceClass in a list of join
equality clauses. However, if the join is outer, it's incorrect to derive
a restriction clause on the outer side from the join conditions, so the
clause refactoring does not happen and we end up with overlapping join
conditions. The code that attempted to deal with such cases had several
subtle bugs, which could result in "left and right pathkeys do not match in
mergejoin" or "outer pathkeys do not match mergeclauses" planner errors,
if the selected join plan type was a mergejoin. (It does not appear that
any actually incorrect plan could have been emitted.)
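For instance, with the same tables as above, the problematic shape is
the outer-join variant, in which the clauses must be left overlapping:
t1 LEFT JOIN t2 ON t1.a = t2.x AND t1.b = t2.x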
The core of the problem really was failure to recognize that the outer and
inner relations' pathkeys have different relationships to the mergeclause
list. A join's mergeclause list is constructed by reference to the outer
pathkeys, so it will always be ordered the same as the outer pathkeys, but
this cannot be presumed true for the inner pathkeys. If the inner sides of
the mergeclauses contain multiple references to the same EquivalenceClass
({t2.x} in the above example) then a simplistic rendering of the required
inner sort order is like "ORDER BY t2.x, t2.x", but the pathkey machinery
recognizes that the second sort column is redundant and throws it away.
The mergejoin planning code failed to account for that behavior properly.
One error was to try to generate cut-down versions of the mergeclause list
from cut-down versions of the inner pathkeys in the same way as the initial
construction of the mergeclause list from the outer pathkeys was done; this
could lead to choosing a mergeclause list that fails to match the outer
pathkeys. The other problem was that the pathkey cross-checking code in
create_mergejoin_plan treated the inner and outer pathkey lists
identically, whereas actually the expectations for them must be different.
That led to false "pathkeys do not match" failures in some cases, and in
principle could have led to failure to detect bogus plans in other cases,
though there is no indication that such bogus plans could be generated.
Reported by Alexander Kuzmenkov, who also reviewed this patch. This has
been broken for years (back to around 8.3 according to my testing), so
back-patch to all supported branches.
Discussion: https://postgr.es/m/5dad9160-4632-0e47-e120-8e2082000c01@postgrespro.ru
Similar to what commit b022923556 did for a
different set of functions, pass the required bits of the PartitionKey
instead of the whole thing. This allows these functions to be used
without needing the PartitionKey to be available.
Amit Langote. The larger patch series of which this patch is a part
has been reviewed and tested by Ashutosh Bapat, David Rowley, Dilip
Kumar, Jesper Pedersen, Rajkumar Raghuwanshi, Beena Emerson, Kyotaro
Horiguchi, Álvaro Herrera, and me, but especially and in great detail
by David Rowley.
Discussion: http://postgr.es/m/098b9c71-1915-1a2a-8d52-1a7a50ce79e8@lab.ntt.co.jp
Discussion: http://postgr.es/m/1f6498e8-377f-d077-e791-5dc84dba2c00@lab.ntt.co.jp
To support parameters in CALL, move the parse analysis of the procedure
and arguments into the global transformation phase, so that the parser
hooks can be applied. And then at execution time pass the parameters
from ProcessUtility on to ExecuteCallStmt.
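A sketch of the sort of usage this enables (names invented): a CALL
whose argument must be resolved through the parser hooks, here a
plpgsql variable:
CREATE PROCEDURE note(msg text) LANGUAGE plpgsql AS
$$ BEGIN RAISE NOTICE '%', msg; END $$;
DO $$
DECLARE m text := 'hello';
BEGIN
  CALL note(m);  -- "m" is resolved by the plpgsql parser hooks
END $$;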
Add the user-callable functions sha224, sha256, sha384, sha512. We
already had these in the C code to support SCRAM, but there was no test
coverage outside of the SCRAM tests. Adding these as user-callable
functions allows writing some tests. Also, we have a user-callable md5
function but no more modern alternative, which led to wide use of md5 as
a general-purpose hash function, which leads to occasional complaints
about using md5.
Also mark the existing md5 functions as leak-proof.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
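For example, the new functions can be used directly in queries:
SELECT sha256('abc');
-- \xba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad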
It's not necessary to fully initialize the executor data structures
for partitions to which no tuples are ever routed. Consider, for
example, an INSERT statement that inserts only one row: it only cares
about the partition to which that one row is routed. The new function
ExecInitPartitionInfo performs the initialization in question only
when a particular partition is about to receive a tuple. This includes
creating, validating, and saving a pointer to the ResultRelInfo,
setting up for speculative insertions, translating WCOs and
initializing the resulting expressions, translating returning lists
and building the appropriate projection information, and setting up a
tuple conversion map.
One thing that's not deferred is locking the child partitions; that
seems desirable but would need more thought. Still, testing shows
that this makes single-row inserts significantly faster on a table
with many partitions without harming the bulk-insert case.
Amit Langote, reviewed by Etsuro Fujita, with a few changes by me
Discussion: http://postgr.es/m/8975331d-d961-cbdd-f862-fdd3d97dc2d0@lab.ntt.co.jp
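For example (schema invented), with a table split into many partitions:
CREATE TABLE events (id int, ts date) PARTITION BY RANGE (ts);
CREATE TABLE events_2018_02 PARTITION OF events
  FOR VALUES FROM ('2018-02-01') TO ('2018-03-01');
-- ... many sibling partitions ...
INSERT INTO events VALUES (1, '2018-02-22');
-- only events_2018_02 has its ResultRelInfo and related state built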
Commit 7d8ac9814b adjusted these
tests in the hope of preserving the plan shape, but I failed to
notice that the three partitions were, on my local machine, choosing
two different plan shapes. This is probably related to the fact
that all three tables have exactly the same row count. Try to
improve the situation by making pht1_e about half as large as
the other two.
Per Tom Lane and the buildfarm.
Discussion: http://postgr.es/m/25380.1519277713@sss.pgh.pa.us
Previously, Append didn't charge anything at all, and MergeAppend
charged only cpu_operator_cost, about half the value used here. This
change might make MergeAppend plans slightly more likely to be chosen
than before, since this commit increases the assumed cost for Append
-- with default values -- by 0.005 per tuple but MergeAppend by only
0.0025 per tuple. Since the comparisons required by MergeAppend are
costed separately, it's not clear why MergeAppend needs to be
otherwise more expensive than Append, so hopefully this is OK.
Prior to partition-wise join, it didn't really matter whether or not
an Append node had any cost of its own, because every plan had to use
the same number of Append or MergeAppend nodes and in the same places.
Only the relative cost of Append vs. MergeAppend made a difference.
Now, however, it is possible to avoid some of the Append nodes using a
partition-wise join, so it's worth making an effort. Pending patches
for partition-wise aggregate care too, because an Append of Aggregate
nodes will incur the Append overhead fewer times than an Aggregate
over an Append. Although in most cases this change will favor the use
of partition-wise techniques, it does the opposite when the join
cardinality is greater than the sum of the input cardinalities. Since
this situation arises in an existing regression test, I [rhaas]
adjusted it to keep the overall plan shape approximately the same.
Jeevan Chalke, per a suggestion from David Rowley. Reviewed by
Ashutosh Bapat. Some changes by me. The larger patch series of which
this patch is a part was also reviewed and tested by Antonin Houska,
Rajkumar Raghuwanshi, David Rowley, Dilip Kumar, Konstantin Knizhnik,
Pascal Legrand, Rafia Sabih, and me.
Discussion: http://postgr.es/m/CAKJS1f9UXdk6ZYyqbJnjFO9a9hyHKGW7B=ZRh-rxy9qxfPA5Gw@mail.gmail.com
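The per-tuple numbers above follow from the default cost settings,
which can be inspected directly:
SHOW cpu_tuple_cost;     -- 0.01 by default
SHOW cpu_operator_cost;  -- 0.0025 by default; MergeAppend's old charge
-- Append:      0      -> 0.005  per tuple (increase 0.005)
-- MergeAppend: 0.0025 -> 0.005  per tuple (increase 0.0025)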
This oversight led to data corruption in matviews, manifesting as
"could not access status of transaction" before our most recent releases,
and "found xmin from before relfrozenxid" errors since then.
The proximate cause of the problem seems to have been confusion between
the task of preserving dropped-column status and the task of preserving
frozenxid status. Those are required for distinct sets of relkinds,
and the reasoning was entirely undocumented in the source code. In hopes
of forestalling future errors of the same kind, try to improve the
commentary in this area.
In passing, also improve the remarkably unhelpful comments around
pg_upgrade's set_frozenxids(). That's not actually buggy AFAICS,
but good luck figuring out what it does from the old comments.
Per report from Claudio Freire. It appears that bug #14852 from Alexey
Ermakov is an earlier report of the same issue, and there may be other
cases that we failed to identify at the time.
Patch by me based on analysis by Andres Freund. The bug dates back
to the introduction of matviews, so back-patch to all supported branches.
Discussion: https://postgr.es/m/CAGTBQpbrY9CdRGGhyBZ9yqY4jWaGC85rUF4X+R7d-aim=mBNsw@mail.gmail.com
Discussion: https://postgr.es/m/20171013115320.28049.86457@wrigleys.postgresql.org
Previously tts_off was, for unknown reasons, of type long. For one
thing, that's unnecessary, as tuples are restricted in length; for
another, long would be a bad choice of type even if that weren't the
case, as it's not reliably wider than an int. Also HeapTupleHeader->t_len
is a uint32.
This is split off from a larger patch implementing JITed tuple
deforming. Seems like an independent improvement, as tiny as it is.
Author: Andres Freund
The previous coding here applied atoi() to strings that could represent
values too large to fit in an int. If the overflowed value happened to
match one of the cases it was looking for, it would drop that limit
value from the output, leading to incorrect restoration of the sequence.
Avoid the unsafe behavior, and also make the logic cleaner by explicitly
calculating the default min/max values for the appropriate kind of
sequence.
Reported and patched by Alexey Bashtanov, though I whacked his patch
around a bit. Back-patch to v10 where the faulty logic was added.
Discussion: https://postgr.es/m/cb85a9a5-946b-c7c4-9cf2-6cd6e25d7a33@imap.cc
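A minimal example of the kind of input that tripped the old coding:
limits that are valid bigint values but overflow atoi():
CREATE SEQUENCE s MAXVALUE 9223372036854775807;
-- dumping this applied atoi() to '9223372036854775807', with
-- implementation-defined (overflowed) results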
An updating query that reads a CTE within an InitPlan or SubPlan could get
incorrect results if it updates rows that are concurrently being modified.
This is caused by CteScanNext supposing that nothing inside its recursive
ExecProcNode call could change which read pointer is selected in the CTE's
shared tuplestore. While that's normally true because of scoping
considerations, it can break down if an EPQ plan tree gets built during the
call, because EvalPlanQualStart builds execution trees for all subplans
whether they're going to be used during the recheck or not. And it seems
like a pretty shaky assumption anyway, so let's just reselect our own read
pointer here.
Per bug #14870 from Andrei Gorita. This has been broken since CTEs were
implemented, so back-patch to all supported branches.
Discussion: https://postgr.es/m/20171024155358.1471.82377@wrigleys.postgresql.org
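Schematically (names invented), the affected query shape reads a CTE
from within an InitPlan while updating rows:
CREATE TABLE t391 (a int, b int);
WITH v AS (SELECT a FROM t391)
UPDATE t391 SET b = coalesce(b, 0) + 1
WHERE a <= (SELECT max(a) FROM v);  -- the InitPlan scans the CTE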
If we restrict unique constraints on partitioned tables so that they
must always include the partition key, then our standard approach to
unique indexes already works --- each unique key is forced to exist
within a single partition, so enforcing the unique restriction in each
index individually is enough to have it enforced globally. Therefore we
can implement unique indexes on partitions by simply removing a few
restrictions (and adding others).
Discussion: https://postgr.es/m/20171222212921.hi6hg6pem2w2t36z@alvherre.pgsql
Discussion: https://postgr.es/m/20171229230607.3iib6b62fn3uaf47@alvherre.pgsql
Reviewed-by: Simon Riggs, Jesper Pedersen, Peter Eisentraut, Jaime
Casanova, Amit Langote
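Concretely, the retained restriction is that the unique key must
include all partition key columns:
CREATE TABLE p (a int, b text, PRIMARY KEY (a)) PARTITION BY RANGE (a);  -- allowed
CREATE TABLE q (a int, b text, UNIQUE (b)) PARTITION BY RANGE (a);       -- rejected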
In what was doubtless a typo, commit bf6c614a2 introduced a duplicate
initialization of a local variable. This made Coverity unhappy, as well
as pretty much anybody reading the code. We don't even have a real use
for the local variable, so just remove it.
Introduce a new format_type_extended, with a flags bitmask argument that
can modify the default behavior. A few compatibility and readability
wrappers remain:
format_type_be
format_type_be_qualified
format_type_with_typemod
while format_type_with_typemod_qualified, which had a single caller, is
removed.
Author: Michael Paquier, some revisions by me
Discussion: https://postgr.es/m/20180213035107.GA2915@paquier.xyz
The reason for doing so is that it will allow expression evaluation to
optimize based on the underlying tupledesc. In particular it will
allow JITing tuple deforming together with the expression itself.
For that expression initialization needs to be moved after the
relevant slots are initialized - mostly unproblematic, except in the
case of nodeWorktablescan.c.
After doing so there's no need for ExecAssignResultType() and
ExecAssignResultTypeFromTL() anymore, as all former callers have been
converted to create a slot with a fixed descriptor.
When creating a slot with a fixed descriptor, tts_values/isnull can be
allocated together with the main slot, reducing allocation overhead
and increasing cache density a bit.
Author: Andres Freund
Discussion: https://postgr.es/m/20171206093717.vqdxe5icqttpxs3p@alap3.anarazel.de
This has a performance benefit of its own, although not a huge one. The
primary benefit is that it will allow JITing tuple deforming and
comparator invocations.
Large parts of this were previously committed (773aec7aa), but the
commit contained an omission around cross-type comparisons and was
thus reverted.
Author: Andres Freund
Discussion: https://postgr.es/m/20171129080934.amqqkke2zjtekd4t@alap3.anarazel.de
elog(FATAL) would end up calling PortalCleanup(), which would call
executor shutdown code, which could fail and crash, especially under
parallel query. This was introduced by
8561e4840c, which did not want to mark an
active portal as failed by a normal transaction abort anymore. But we
do need to do that for an elog(FATAL) exit. Introduce a variable
shmem_exit_inprogress similar to the existing proc_exit_inprogress, so
we can tell whether we are in the FATAL exit scenario.
Reported-by: Andres Freund <andres@anarazel.de>
Other header files should never #include postgres.h (nor postgres_fe.h,
nor c.h), per project policy. Also, there's no need for any backend .c
file to explicitly include elog.h or palloc.h, because postgres.h pulls
those in already.
Extracted from a larger patch by Kyotaro Horiguchi. The rest of the
removals he suggests require more study, but these are no-brainers.
Discussion: https://postgr.es/m/20180215.200447.209320006.horiguchi.kyotaro@lab.ntt.co.jp
This reverts commit 773aec7aa9.
There's an unresolved issue in the reverted commit: It only creates
one comparator function, but for the nodeSubplan.c case we need
more (c.f. FindTupleHashEntry vs LookupTupleHashEntry calls in
nodeSubplan.c).
This isn't too difficult to fix, but it's not entirely trivial
either. The fact that the issue only causes breakage on 32bit systems
shows that the current test coverage isn't that great. To avoid
turning half the buildfarm red till those two issues are addressed,
revert.
If a plpgsql function is declared to return a domain type, and the domain's
constraints forbid a null value, it was nonetheless possible to return
NULL, because we didn't bother to check the constraints for a null result.
I'd noticed this while fooling with domains-over-composite, but had not
gotten around to fixing it immediately.
Add a regression test script exercising this and various other domain
cases, largely borrowed from the plpython_types test.
Although this is clearly a bug fix, I'm not sure whether anyone would
thank us for changing the behavior in stable branches, so I'm inclined
not to back-patch.
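The previously-missed check, in miniature (names invented):
CREATE DOMAIN dnn AS int NOT NULL;
CREATE FUNCTION f474() RETURNS dnn LANGUAGE plpgsql AS
$$ BEGIN RETURN NULL; END $$;
SELECT f474();  -- formerly returned NULL; now fails the NOT NULL constraint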
Seems a bit silly that many (in fact all, as of today) uses of
StaticAssertExpr would need to cast it to void to avoid warnings from
pickier compilers. Let's just do the cast right in the macro, instead.
In passing, change StaticAssertExpr to StaticAssertStmt in one
place where that seems more apropos.
Discussion: https://postgr.es/m/16161.1518715186@sss.pgh.pa.us
This is the logical conclusion of our decision to support Assert()
in both frontend and backend code: it should be possible to use that
after including just c.h. But as things were arranged before, if
you wanted to use Assert() in code that might be compiled for either
environment, you had to include postgres.h for the backend case.
Let's simplify that.
Per buildfarm, some of whose members started throwing warnings after
commit 0c62356cc added an Assert in src/port/snprintf.c.
It's possible that some other src/port files that use the stanza
#ifndef FRONTEND
#include "postgres.h"
#else
#include "postgres_fe.h"
#endif
could now be simplified to just say '#include "c.h"'. I have not
tested for that, though, and it'd be unlikely to apply for more
than a small number of them.
This effectively reverts commit 9edc97b71 (although the test is now
in a different place and has different contents). We don't need that
hack anymore, because since commit 4b93f5799, this test case no longer
throws an error and so there's no displayed CONTEXT that could vary
depending on CLOBBER_CACHE_ALWAYS. The underlying unstable-output
problem isn't really gone, of course, but it no longer manifests here.
The buildfarm's CLOBBER_CACHE_ALWAYS animals aren't happy with some
of the test cases added in commit 4b93f5799. There are two different
problems:
* In two places, a different CONTEXT stack is shown because the error
is detected in a different place, due to recompiling an expression
from scratch rather than re-using a previously cached plan for it.
I fixed these via the expedient of hiding the CONTEXT stack altogether.
* In one place, a test expected to fail (because a cached plan hadn't
been updated) actually succeeds (because the forced recompile makes
it good). I couldn't think of a simple workaround for this, so I've
just commented out that test step altogether.
I have hopes of improving things enough that both of these kluges can
be reverted eventually. The first one is the same kind of problem
previously discussed at
https://postgr.es/m/31545.1512924904@sss.pgh.pa.us
but there was insufficient agreement about how to fix it, so we
just hacked around the output instability (commit 9edc97b71).
The second issue should be fixed by allowing the plan to be rebuilt
when a type conflict is detected. But for today, let's just make the
buildfarm green again.
Some older compilers otherwise sometimes complain about undefined
values, even though the return value should not be used in the
overflow case. We assume that any decent compiler will optimize away
the unnecessary assignment in performance critical cases.
We do not want to restrain the returned value to a specific value,
e.g. 0 or the wrapped-around value, because some fast ways to
implement overflow detecting math do not easily allow for
that (e.g. msvc intrinsics). As the function documentation already
documents the returned value in case of intrinsics to be
implementation defined, no documentation has to be updated.
Per complaint from Tom Lane and his buildfarm member prairiedog.
Author: Andres Freund
Discussion: https://postgr.es/m/18169.1513958454@sss.pgh.pa.us
All of these are false positives, but in each case a fair amount of
analysis is needed to see that, and it's not too surprising that not all
compilers are smart enough. (In particular, in the logtape.c case, a
compiler lacking the knowledge provided by the Assert would almost surely
complain, so that this warning will be seen in any non-assert build.)
Some of these are of long standing while others are pretty recent,
but it only seems worth fixing them in HEAD.
Jaime Casanova, tweaked a bit by me
Discussion: https://postgr.es/m/CAJGNTeMcYAMJdPAom52dppLMtF-UnEZi0dooj==75OEv1EoBZA@mail.gmail.com
Per commit e748e902d, we appear to have little or no coverage in the
buildfarm of machines that will dump core when asked to printf a
null string pointer. Let's try to improve that situation by adding
an assertion that will make src/port/snprintf.c behave that way.
Since it's just an assertion, it won't break anything in production
builds, but it will help developers find this type of oversight.
Note that while our buildfarm coverage of machines that use that
snprintf implementation is pretty thin on the Unix side (apparently
amounting only to gaur/pademelon), all of the MSVC critters use it.
Discussion: https://postgr.es/m/156b989dbc6fe7c4d3223cf51da61195@postgrespro.ru
plpython_error_callback() reported the name of the function associated
with the topmost PL/Python execution context. This was not merely
wrong if there were nested PL/Python contexts, but it risked a core
dump if the topmost one is an inline code block rather than a named
function. That will have proname = NULL, and so we were passing a NULL
pointer to snprintf("%s"). It seems that none of the PL/Python-testing
machines in the buildfarm will dump core for that, but some platforms do,
as reported by Marina Polyakova.
Investigation finds that there actually is an existing regression test
that used to prove that the behavior was wrong, though apparently no one
had noticed that it was printing the wrong function name. It stopped
showing the problem in 9.6 when we adjusted psql to not print CONTEXT
by default for NOTICE messages. The problem is masked (if your platform
avoids the core dump) in error cases, because PL/Python will throw away
the originally generated error info in favor of a new traceback produced
at the outer level.
Repair by using ErrorContextCallback.arg to pass the correct context to
the error callback. Add a regression test illustrating correct behavior.
Back-patch to all supported branches, since they're all broken this way.
Discussion: https://postgr.es/m/156b989dbc6fe7c4d3223cf51da61195@postgrespro.ru
These features were never implemented previously for composite or record
variables ... not that the documentation admitted it, so there are no doc
updates here.
This also fixes some issues concerning enforcing DOMAIN NOT NULL
constraints against plpgsql variables, although I'm not sure that
that topic is completely dealt with.
I created a new plpgsql test file for these features, and moved the
one relevant existing test case into that file.
Tom Lane, reviewed by Daniel Gustafsson
Discussion: https://postgr.es/m/18362.1514605650@sss.pgh.pa.us
Over the years we've accreted quite a few special variables that are
predefined in plpgsql trigger functions. The cost of initializing these
variables to their defined values turns out to be a significant part of
the runtime of simple triggers; but, undoubtedly, most real-world triggers
never examine the values of most of these variables.
To improve matters, invent the notion of a variable that has a "promise"
attached to it, specifying which of the predetermined values should be
assigned to the variable if anything ever reads it. This eliminates all
the unneeded startup overhead, in return for a small penalty on accesses
to these variables.
Tom Lane, reviewed by Pavel Stehule
Discussion: https://postgr.es/m/11986.1514407114@sss.pgh.pa.us
Previously, copy_plpgsql_datum did a separate palloc for each variable
needing instance-local storage. In simple benchmarks this made for a
noticeable fraction of the total runtime. Improve it by precalculating
the space needed for all of a function's variables and doing just one
palloc for all of them.
In passing, remove PLPGSQL_DTYPE_EXPR from the list of plpgsql "datum"
types, since in fact it has nothing in common with the others, and there
is no place that needs to discriminate on the basis of dtype between an
expression and any type of datum. And add comments clarifying which
datum struct fields are generic and which aren't.
Tom Lane, reviewed by Pavel Stehule
Discussion: https://postgr.es/m/11986.1514407114@sss.pgh.pa.us
Formerly, DTYPE_REC was used only for variables declared as "record";
variables of named composite types used DTYPE_ROW, which is faster for
some purposes but much less flexible. In particular, the ROW code paths
are entirely incapable of dealing with DDL-caused changes to the number
or data types of the columns of a row variable, once a particular plpgsql
function has been parsed for the first time in a session. And, since the
stored representation of a ROW isn't a tuple, there wasn't any easy way
to deal with variables of domain-over-composite types, since the domain
constraint checking code would expect the value to be checked to be a
tuple. A lesser, but still real, annoyance is that ROW format cannot
represent a true NULL composite value, only a row of per-field NULL
values, which is not exactly the same thing.
Hence, switch to using DTYPE_REC for all composite-typed variables,
whether "record", named composite type, or domain over named composite
type. DTYPE_ROW remains but is used only for its native purpose, to
represent a fixed-at-compile-time list of variables, for instance the
targets of an INTO clause.
To accomplish this without taking significant performance losses, introduce
infrastructure that allows storing composite-type variables as "expanded
objects", similar to the "expanded array" infrastructure introduced in
commit 1dc5ebc90. A composite variable's value is thereby kept (most of
the time) in the form of separate Datums, so that field accesses and
updates are not much more expensive than they were in the ROW format.
This holds the line, more or less, on performance of variables of named
composite types in field-access-intensive microbenchmarks, and makes
variables declared "record" perform much better than before in similar
tests. In addition, the logic involved with enforcing composite-domain
constraints against updates of individual fields is in the expanded
record infrastructure not plpgsql proper, so that it might be reusable
for other purposes.
In further support of this, introduce a typcache feature for assigning a
unique-within-process identifier to each distinct tuple descriptor of
interest; in particular, DDL alterations on composite types result in a new
identifier for that type. This allows very cheap detection of the need to
refresh tupdesc-dependent data. This improves on the "tupDescSeqNo" idea
I had in commit 687f096ea: that assigned identifying sequence numbers to
successive versions of individual composite types, but the numbers were not
unique across different types, nor was there support for assigning numbers
to registered record types.
In passing, allow plpgsql functions to accept as well as return type
"record". There was no good reason for the old restriction, and it
was out of step with most of the other PLs.
Tom Lane, reviewed by Pavel Stehule
Discussion: https://postgr.es/m/8962.1514399547@sss.pgh.pa.us
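One user-visible consequence, sketched (names invented): plpgsql
functions may now take arguments of type record:
CREATE FUNCTION describe(r record) RETURNS text LANGUAGE plpgsql AS
$$ BEGIN RETURN r::text; END $$;
SELECT describe(ROW(1, 'x'));  -- previously rejected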
Instead of issuing a reload after pg_hba.conf changes between test
cases, run a full restart. With a reload, an error in the new
pg_hba.conf is ignored and the tests will continue to run with the old
settings, invalidating the subsequent test cases. With a restart, a
faulty pg_hba.conf will lead to the test being aborted, which is what
we'd rather want.
The modern way is to use a missing_ok argument instead of two separate
almost-identical routines, so do that.
Author: Michaël Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/20180201063212.GE6398@paquier.xyz
The previous code failed to realize that this setting effectively
disables parallelism, and would crash if it decided to attempt
parallelism anyway. Instead, treat it as a disabling condition.
Kyotaro Horiguchi, who also reported the issue. Reviewed by Michael
Paquier and Peter Geoghegan.
Discussion: http://postgr.es/m/20180209.170635.256350357.horiguchi.kyotaro@lab.ntt.co.jp
pg_dump supposed that a stats object necessarily shares the same schema
as its underlying table, and that it doesn't have a separate owner.
These things may have been true during early development of the feature,
but they are not true as of v10 release.
Failure to track the object's schema separately turns out to have only
limited consequences, because pg_get_statisticsobjdef() always schema-
qualifies the target object name in the generated CREATE STATISTICS command
(a decision out of step with the rest of ruleutils.c, but I digress).
Therefore the restored object would be in the right schema, so that the
only problem is that the TOC entry would be mislabeled as to schema. That
could lead to wrong decisions for schema-selective restores, for example.
The ownership issue is a bit more serious: not only was the TOC entry
potentially mislabeled as to owner, but pg_dump didn't bother to issue an
ALTER OWNER command at all, so that after restore the stats object would
continue to be owned by the restoring superuser.
A final point is that decisions as to whether to dump a stats object or
not were driven by whether the underlying table was dumped or not. While
that's not wrong on its face, it won't scale nicely to the planned future
extension to cross-table statistics. Moreover, that design decision comes
out of the view of stats objects as being auxiliary to a particular table,
like a rule or trigger, which is exactly where the above problems came
from. Since we're now treating stats objects more like independent objects
in their own right, they ought to behave like standalone objects for this
purpose too. So change to using the generic selectDumpableObject() logic
for them (which presently amounts to "dump if containing schema is to be
dumped").
Along the way to fixing this, restructure so that getExtendedStatistics
collects the identity info (only) for all extended stats objects in one
query, and then for each object actually being dumped, we retrieve the
definition in dumpStatisticsExt. This is necessary to ensure that
schema-qualification in the generated CREATE STATISTICS command happens
with respect to the search path that pg_dump will now be using at restore
time (ie, the schema the stats object is in, not that of the underlying
table). It's probably also significantly faster in the typical scenario
where only a minority of tables have extended stats.
Back-patch to v10 where extended stats were introduced.
Discussion: https://postgr.es/m/18272.1518328606@sss.pgh.pa.us
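The separately-tracked properties can be exercised directly (all names
invented):
CREATE TABLE public.t707 (a int, b int);
CREATE SCHEMA stats;
CREATE STATISTICS stats.s1 ON a, b FROM public.t707;  -- schema differs from the table's
ALTER STATISTICS stats.s1 OWNER TO alice;             -- ownership is independent too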
Prematurely freeing the EState used to evaluate CALL arguments led, in some
cases, to passing dangling pointers to the procedure. This was masked in
trivial cases because the argument pointers would point to Const nodes in
the original expression tree, and in some other cases because the result
value would end up in the standalone ExprContext rather than in memory
belonging to the EState --- but that wasn't exactly high quality
programming either, because the standalone ExprContext was never
explicitly freed, breaking assorted API contracts.
In addition, using a separate EState for each argument was just silly.
So let's use just one EState, and one ExprContext, and make the latter
belong to the former rather than be standalone, and clean up the EState
(and hence the ExprContext) post-call.
While at it, improve the function's commentary a bit.
Discussion: https://postgr.es/m/29173.1518282748@sss.pgh.pa.us
CALL statements cannot support sub-SELECTs in the arguments of the called
procedure, since they just use ExecEvalExpr to evaluate such arguments.
Teach transformSubLink() to reject the case, as it already does for other
contexts in which subqueries are not supported.
In passing, s/EXPR_KIND_CALL/EXPR_KIND_CALL_ARGUMENT/ to make that enum
symbol line up more closely with the phrasing of the error messages it is
associated with. And fix someone's weak grasp of English grammar in the
preceding EXPR_KIND_PARTITION_EXPRESSION addition. Also update an
incorrect comment in resolve_unique_index_expr (possibly it was correct
when written, but nowadays transformExpr definitely does reject SRFs here).
Per report from Pavel Stehule --- but this resolves only one of the bugs
he mentions.
Discussion: https://postgr.es/m/CAFj8pRDxOwPPzpA8i+AQeDQFj7bhVw-dR2==rfWZ3zMGkm568Q@mail.gmail.com
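The now-rejected shape, in miniature (names invented; exact error
wording not shown):
CREATE PROCEDURE p734(x int) LANGUAGE plpgsql AS $$ BEGIN NULL; END $$;
CALL p734((SELECT 1));  -- rejected: subqueries not supported in CALL arguments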
Doing so causes EXPLAIN ANALYZE to show trigger statistics multiple
times. Commit 2f17844104 seems to
be to blame for this.
Amit Langote, reviewed by Amit Khandekar, Etsuro Fujita, and me.
When the previously-chosen plan was non-partial, all pa_finished
flags for partial plans are now set, and pa_next_plan has not yet
been set to INVALID_SUBPLAN_INDEX, the previous code could go into
an infinite loop.
Report by Rajkumar Raghuwanshi. Patch by Amit Khandekar and me.
Review by Kyotaro Horiguchi.
Discussion: http://postgr.es/m/CAJ3gD9cf43z78qY=U=H0HvOEN341qfRO-vLpnKPSviHeWgJQ5w@mail.gmail.com
Instead of using the psql/libpq connection string as the displayed test
name and relying on "notes" and source code comments to explain the
tests, give the tests self-explanatory names, like we do elsewhere.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Add explicit collation on the trigger name to avoid locale dependencies.
Also restrict the tables selected, to avoid interference from
concurrently running tests.
This patch adds the ability to use "RANGE offset PRECEDING/FOLLOWING"
frame boundaries in window functions. We'd punted on that back in the
original patch to add window functions, because it was not clear how to
do it in a reasonably data-type-extensible fashion. That problem is
resolved here by adding the ability for btree operator classes to provide
an "in_range" support function that defines how to add or subtract the
RANGE offset value. Factoring it this way also allows the operator class
to avoid overflow problems near the ends of the datatype's range, if it
wishes to expend effort on that. (In the committed patch, the integer
opclasses handle that issue, but it did not seem worth the trouble to
avoid overflow failures for datetime types.)
The patch includes in_range support for the integer_ops opfamily
(int2/int4/int8) as well as the standard datetime types. Support for
other numeric types has been requested, but that seems like suitable
material for a follow-on patch.
In addition, the patch adds GROUPS mode which counts the offset in
ORDER-BY peer groups rather than rows, and it adds the frame_exclusion
options specified by SQL:2011. As far as I can see, we are now fully
up to spec on window framing options.
Existing behaviors remain unchanged, except that I changed the errcode
for a couple of existing error reports to meet the SQL spec's expectation
that negative "offset" values should be reported as SQLSTATE 22013.
Internally and in relevant parts of the documentation, we now consistently
use the terminology "offset PRECEDING/FOLLOWING" rather than "value
PRECEDING/FOLLOWING", since the term "value" is confusingly vague.
Oliver Ford, reviewed and whacked around some by me
Discussion: https://postgr.es/m/CAGMVOdu9sivPAxbNN0X+q19Sfv9edEPv=HibOJhB14TJv_RCQg@mail.gmail.com
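For example (table invented), both new frame types in action:
CREATE TABLE readings (ts timestamptz, val int);
SELECT ts, val,
       sum(val) OVER (ORDER BY ts
                      RANGE BETWEEN INTERVAL '1 hour' PRECEDING
                            AND CURRENT ROW) AS last_hour,
       sum(val) OVER (ORDER BY val
                      GROUPS BETWEEN 1 PRECEDING
                             AND 1 FOLLOWING) AS with_peer_groups
FROM readings;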
LogicalTapeFreeze() may write out its first block when it is dirty but
not full, and then immediately read the first block back in from its
BufFile as a BLCKSZ-width block. This can only occur in rare cases
where very few tuples were written out, which is currently only
possible with parallel external tuplesorts. To avoid valgrind
complaints, tell it to treat the tail of logtape.c's buffer as
defined.
Commit 9da0cc3528 exposed this problem
but did not create it. LogicalTapeFreeze() has always tended to write
out some amount of garbage bytes, but previously never wrote less than
one block of data in total, so the problem was masked.
Per buildfarm members lousyjack and skink.
Peter Geoghegan, based on a suggestion from Tom Lane and me. Some
comment revisions by me.
Up to now, useful info for writing a new btree opclass has been buried
in the backend's nbtree/README file. Let's move it into the SGML docs,
in preparation for extending it with info about "in_range" functions
in the upcoming window RANGE patch.
To do this, I chose to create a new chapter for btree indexes in Part VII
(Internals), parallel to the chapters that exist for the newer index AMs.
This is a pretty short chapter as-is. At some point somebody might care
to flesh it out with more detail about btree internals, but that is
beyond the scope of my ambition for today.
Discussion: https://postgr.es/m/23141.1517874668@sss.pgh.pa.us
The previous code assumed that we'd always succeed in creating
child-joins for a joinrel for which partition-wise join was considered,
but that's not guaranteed, at least in the case where dummy rels
are involved.
Ashutosh Bapat, with some wordsmithing by me.
Discussion: http://postgr.es/m/CAFjFpRf8=uyMYYfeTBjWDMs1tR5t--FgOe2vKZPULxxdYQ4RNw@mail.gmail.com
pg_upgrade has always attempted to ensure that the transient dump files
it creates are inaccessible except to the owner. However, refactoring
in commit 76a7650c4 broke that for the file containing "pg_dumpall -g"
output; since then, that file was protected according to the process's
default umask. Since that file may contain role passwords (hopefully
encrypted, but passwords nonetheless), this is a particularly unfortunate
oversight. Prudent users of pg_upgrade on multiuser systems would
probably run it under a umask tight enough that the issue is moot, but
perhaps some users are depending only on pg_upgrade's umask changes to
protect their data.
To fix this in a future-proof way, let's just tighten the umask at
process start. There are no files pg_upgrade needs to write at a
weaker security level; and if there were, transiently relaxing the
umask around where they're created would be a safer approach.
Report and patch by Tom Lane; the idea for the fix is due to Noah Misch.
Back-patch to all supported branches.
Security: CVE-2018-1053
Failure to advance the list pointer while reading partition expressions
from a list results in invoking an input function with inappropriate data,
possibly leading to crashes or, with carefully crafted input, disclosure
of arbitrary backend memory.
Bug discovered independently by Álvaro Herrera and David Rowley.
This patch is by Álvaro but owes something to David's proposed fix.
Back-patch to v10 where the issue was introduced.
Security: CVE-2018-1052
We don't need to set up the shared space for hash join instrumentation data
if instrumentation hasn't been requested. Let's follow the example of the
similar Sort node code and save a few cycles by skipping that when we can.
This reverts commit d59ff4ab3 and instead allows us to use the safer choice
of passing noError = false to shm_toc_lookup in ExecHashInitializeWorker,
since if we reach that call there should be a TOC entry to be found.
Thomas Munro
Discussion: https://postgr.es/m/E1ehkoZ-0005uW-43%40gemulon.postgresql.org
One or another author of commit 5bcf389ec seems to have thought that
computing an offset from a NULL pointer would yield another NULL pointer.
There may possibly be architectures where that works, but common machines
don't work like that. Per a quick code review of places calling
shm_toc_lookup and not using noError = false.
Commit 445dbd82a basically missed the point of commit d46633506,
which was that we shouldn't allow shm_toc_lookup() failure to lead
to a core dump or assertion crash, because the odds of such a
failure should never be considered negligible. It's correct that
we can't expect the PARALLEL_KEY_ERROR_QUEUE TOC entry to be there
if we have no workers. But if we have no workers, we're not going
to do anything in this function with the lookup result anyway,
so let's just skip it. That lets the code use the easy-to-prove-safe
noError=false case, rather than anything requiring effort to review.
Back-patch to v10, like the previous commit.
Discussion: https://postgr.es/m/3647.1517601675@sss.pgh.pa.us
Investigation of 2d2d06b7e2 revealed that
identity values were not applied in some further cases, including
logical replication subscribers, VALUES RTEs, and ALTER TABLE ... ADD
COLUMN. To fix all that, apply the identity column expression in
build_column_default() instead of repeating the same logic at each call
site.
For ALTER TABLE ... ADD COLUMN ... IDENTITY, the previous coding
completely ignored that existing rows for the new column should have
values filled in from the identity sequence. The coding using
build_column_default() fails for this because the sequence ownership
isn't registered until after ALTER TABLE, and we can't do it before
because we don't have the column in the catalog yet. So we specially
remember in ColumnDef the sequence name that we decided on and build a
custom NextValueExpr using that.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
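The ALTER TABLE case now behaves as one would expect (names invented):
CREATE TABLE t874 (a int);
INSERT INTO t874 VALUES (1), (2);
ALTER TABLE t874 ADD COLUMN id int GENERATED ALWAYS AS IDENTITY;
SELECT * FROM t874;  -- existing rows have id values from the identity sequence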
To make this work, tuplesort.c and logtape.c must also support
parallelism, so this patch adds that infrastructure and then applies
it to the particular case of parallel btree index builds. Testing
to date shows that this can often be 2-3x faster than a serial
index build.
The model for deciding how many workers to use is fairly primitive
at present, but it's better than not having the feature. We can
refine it as we get more experience.
Peter Geoghegan with some help from Rushabh Lathia. While Heikki
Linnakangas is not an author of this patch, he wrote other patches
without which this feature would not have been possible, and
therefore the release notes should possibly credit him as an author
of this feature. Reviewed by Claudio Freire, Heikki Linnakangas,
Thomas Munro, Tels, Amit Kapila, me.
Discussion: http://postgr.es/m/CAM3SWZQKM=Pzc=CAHzRixKjp2eO5Q0Jg1SoFQqeXFQ647JiwqQ@mail.gmail.com
Discussion: http://postgr.es/m/CAH2-Wz=AxWqDoVvGU7dq856S4r6sJAj6DBn7VMtigkB33N5eyg@mail.gmail.com
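A quick way to exercise it (table name invented;
max_parallel_maintenance_workers is the knob added along with this
feature):
CREATE TABLE big (col int);
SET max_parallel_maintenance_workers = 4;  -- allow up to 4 workers
CREATE INDEX ON big (col);                 -- may now build in parallel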
Remove partition_bound_cmp() and partition_bound_bsearch(), whose
void * argument could be, depending on the situation, of any of
three different types: PartitionBoundSpec *, PartitionRangeBound *,
Datum *.
Instead, introduce separate bound-searching functions for each
situation: partition_list_bsearch, partition_range_bsearch,
partition_range_datum_bsearch, and partition_hash_bsearch. This
requires duplicating the code for binary search, but it makes the
code much more type safe, involves fewer branches at runtime, and
at least in my opinion, is much easier to understand.
Along the way, add an option to partition_range_datum_bsearch
allowing the number of keys to be specified, so that we can search
for partitions based on a prefix of the full list of partition
keys. This is important for pending work to improve partition
pruning.
Amit Langote, per a suggestion from me.
Discussion: http://postgr.es/m/CA+TgmoaVLDLc8=YESRwD32gPhodU_ELmXyKs77gveiYp+JE4vQ@mail.gmail.com
Once this function has been called, we know that all workers have
started and attached to their error queues -- so if any of them
subsequently exit uncleanly, we'll be sure to throw an ERROR promptly.
Otherwise, users of the ParallelContext machinery must be careful not
to wait forever for a worker that has failed to start. Parallel query
manages to work without needing this for reasons explained in new
comments added by this patch, but it's a useful primitive for other
parallel operations, such as the pending patch to make creating a
btree index run in parallel.
Amit Kapila, revised by me. Additional review by Peter Geoghegan.
Discussion: http://postgr.es/m/CAA4eK1+e2MzyouF5bg=OtyhDSX+=Ao=3htN=T-r_6s3gCtKFiw@mail.gmail.com
Issuing 'quit'/'exit' in an empty psql buffer exits psql. Issuing
'quit'/'exit' alone on a line, with no leading whitespace, in a
non-empty psql buffer issues a hint on how to exit.
Also add a similar hint for 'help' in a non-empty psql buffer.
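For example, a session sketched like this (prompt rendering
approximate):
postgres=# SELECT 1 +
postgres-# quit
now draws a hint about how to exit, rather than treating "quit" as
just more query text.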
Reported-by: Everaldo Canuto
Discussion: https://postgr.es/m/flat/CALVFHFb-C_5_94hueWg6Dd0zu7TfbpT7hzsh9Zf0DEDOSaAnfA%40mail.gmail.com
Author: Robert Haas, modified by me
The old code always generated a constraint of the form
col = ANY(ARRAY[val1, val2, ...]), but that's invalid when col is an
array type. Instead, generate col = val when there's only one value,
col = val1 OR col = val2 OR ... when there are multiple values and
col is of array type, and the old form when there are multiple values
and col is not of an array type.
As a side benefit, this makes constraint exclusion able to prune
a list partition declared to accept a single Boolean value, which
didn't work before.
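A sketch of the resulting implied constraints (types and values
illustrative, not taken from the patch):
CREATE TABLE parent (col int[]) PARTITION BY LIST (col);
CREATE TABLE child PARTITION OF parent FOR VALUES IN ('{1}', '{2,3}');
-- implied constraint: col = '{1}' OR col = '{2,3}'
-- (not the invalid col = ANY (ARRAY['{1}', '{2,3}']))
CREATE TABLE boolp (b bool) PARTITION BY LIST (b);
CREATE TABLE boolp_t PARTITION OF boolp FOR VALUES IN (true);
-- implied constraint: b = true, which constraint exclusion can use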
Amit Langote, reviewed by Etsuro Fujita
Discussion: http://postgr.es/m/97267195-e235-89d1-a41a-c110198dfce9@lab.ntt.co.jp
Separate the parts specific to the SSL library from the general logic.
The previous code structure was
open_client_SSL()
  calls verify_peer_name_matches_certificate()
    calls verify_peer_name_matches_certificate_name()
      calls wildcard_certificate_match()
and was completely in fe-secure-openssl.c. The new structure is
open_client_SSL() [openssl]
  calls pq_verify_peer_name_matches_certificate() [generic]
    calls pgtls_verify_peer_name_matches_certificate_guts() [openssl]
      calls openssl_verify_peer_name_matches_certificate_name() [openssl]
        calls pq_verify_peer_name_matches_certificate_name() [generic]
          calls wildcard_certificate_match() [generic]
Move the generic functions into a new file fe-secure-common.c, so the
calls generally go fe-connect.c -> fe-secure.c -> fe-secure-${impl}.c ->
fe-secure-common.c, although there is a bit of back-and-forth between
the last two.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
pg_hba_file_rules erroneously reported the scram-sha-256
authentication method as scram-sha256. Fix that.
To avoid future errors and confusion, also adjust documentation links
and internal symbols to have a separator between "sha" and "256".
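For reference, an illustrative pg_hba.conf line using the correct
spelling:
host    all    all    127.0.0.1/32    scram-sha-256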
Reported-by: Christophe Courtois <christophe.courtois@dalibo.com>
Author: Michael Paquier <michael.paquier@gmail.com>
It's a common task to evaluate a qual and reset the corresponding
expression context. Currently that requires storing the result of the
qual evaluation, resetting the context, and then acting on the result.
As that's awkward, several places only reset the context the next time
through a node. That's not great, so introduce a helper that evaluates
and resets.
It's a bit ugly that it currently uses MemoryContextReset() instead of
ResetExprContext(), but that seems easier than reordering all of
executor.h.
Author: Andres Freund
Discussion: https://postgr.es/m/20180109222544.f7loxrunqh3xjl5f@alap3.anarazel.de
There's never any value in giving a fully specified cache key to
SearchCatCacheList: you might as well call SearchCatCache instead,
since there could be only one match. So the maximum useful number of
key arguments is one less than the supported number of key columns.
We might as well remove the useless extra argument and save a few
bytes per call site, as well as a cycle or so per call.
I believe the reason it was coded like this is that originally, callers
had to write out all the dummy arguments in each call, and so it seemed
less confusing if SearchCatCache and SearchCatCacheList took the same
number of key arguments. But since commit e26c539e9, callers only write
their live arguments explicitly, making that a non-factor; and there's
surely been enough time for third-party modules to adapt to that coding
style. So this is only an ABI break, not an API break, for callers.
Per discussion with Oliver Ford, this might also make it less confusing
how to use SearchCatCacheList correctly.
Discussion: https://postgr.es/m/27788.1517069693@sss.pgh.pa.us
ExecPushExprSlots didn't initialize the resvalue/resnull fields of the
ExprEvalSteps it pushed, as it didn't use them. That caused spurious
valgrind warnings for an upcoming patch, so zero-initialize them.
Also zero-initialize all scratch ExprEvalSteps allocated on the
stack, to avoid issues with similar future omissions of non-critical
data.
In cases where simplehash tables were filled with either a lot of
conflicting hash values, or values that hash to consecutive
values (i.e. build "chains"), the growth heuristics in
d4c62a6b62 could trigger rather
explosively.
To fix that, address some of the reasons (see previous commit) why
the growth heuristics were needed, and only allow growth when the
table isn't too empty. While that means there are a few cases of bad
input that can be slower, that seems a lot better than very quickly
running out of memory.
Author: Tomas Vondra and Andres Freund, with additional input by
Thomas Munro, Tom Lane, Todd A. Cook
Reported-By: Todd A. Cook, Tomas Vondra, Thomas Munro
Discussion: https://postgr.es/m/20171127185700.1470.20362@wrigleys.postgresql.org
Backpatch: 10, where simplehash was introduced
The changes in b81b5a96f4 did not fully
address the issue, because the bit-mixing of the IV into the final
hash key didn't prevent clustering in the input data from surviving
in the output data.
This didn't cause a lot of problems because of the additional growth
conditions added in d4c62a6b62. But as we
want to rein those in due to explosive growth in some edge cases,
this needs to be fixed.
Author: Andres Freund
Discussion: https://postgr.es/m/20171127185700.1470.20362@wrigleys.postgresql.org
Backpatch: 10, where simplehash was introduced
When a password is needed, cases such as
psql -d "postgresql://alice@localhost/testdb" -U bob
would incorrectly prompt for "Password for user bob: ", when actually the
connection will be attempted with username alice. The priority order of
which name to use isn't that important here, but the misleading prompt is.
When we are prompting for a password after initial connection failure,
we can fix this reliably by looking at PQuser(conn) to see how libpq
interpreted the connection arguments. But when we're doing a forced
password prompt because of a -W switch, we can't use that solution.
Fortunately, because the main use of -W is for noninteractive situations,
it's less critical to produce a helpful prompt in such cases. I made
the startup prompt for -W just say "Password: " all the time, rather
than expending extra code on trying to identify which username to use.
In the case of a \c command (after -W has been given), there's already
logic in do_connect that determines whether the "dbname" is a connstring
or URI, so we can avoid lobotomizing the prompt except in cases that
are actually dubious. (We could do similarly in startup.c if anyone
complains, but for now it seems not worthwhile, especially since that
would still be only a partial solution.)
Per bug #15025 from Akos Vandra. Although this is arguably a bug fix,
it doesn't seem worth back-patching. The case where it matters seems
like a very corner-case usage, and someone might complain that we'd
changed the behavior of -W in a minor release.
Discussion: https://postgr.es/m/20180123130013.7407.24749@wrigleys.postgresql.org
create_plan_recurse lacked any stack depth check. This is not per
our normal coding rules, but I'd supposed it was safe because earlier
planner processing is more complex and presumably should eat more
stack. But bug #15033 from Andrew Grossman shows this isn't true,
at least not for queries having the form of a many-thousand-way
INTERSECT stack.
Further testing showed that recurse_set_operations is also capable
of being crashed in this way, since it likewise will recurse to the
bottom of a parsetree before calling any support functions that
might themselves contain any stack checks. However, its stack
consumption is only perhaps a third of create_plan_recurse's.
It's possible that this particular problem with create_plan_recurse can
only manifest in 9.6 and later, since before that we didn't build a Path
tree for set operations. But having seen this example, I now have no
faith in the proposition that create_plan_recurse doesn't need a stack
check, so back-patch to all supported branches.
Discussion: https://postgr.es/m/20180127050845.28812.58244@wrigleys.postgresql.org
DST law changes in Brazil, Sao Tome and Principe. Historical corrections
for Bolivia, Japan, and South Sudan. The "US/Pacific-New" zone has been
removed (it was only a link to America/Los_Angeles anyway).
Commit 09529a70b changed nodeIndexscan.c and nodeIndexonlyscan.c to
postpone initialization of the indexscan proper until the first tuple
fetch. It overlooked the question of mark/restore behavior, which means
that if some caller attempts to mark the scan before the first tuple fetch,
you get a null pointer dereference.
The only existing user of mark/restore is nodeMergejoin.c, which (somewhat
accidentally) will never attempt to set a mark before the first inner tuple
unless the inner child node is a Material node. Hence the case can't arise
normally, so it seems sufficient to document the assumption at both ends.
However, during an EvalPlanQual recheck, ExecScanFetch doesn't call
IndexNext but just returns the jammed-in test tuple. Therefore, if we're
doing a recheck in a plan tree with a mergejoin with inner indexscan,
it's possible to reach ExecIndexMarkPos with iss_ScanDesc still null,
as reported by Guo Xiang Tan in bug #15032.
Really, when there's a test tuple supplied during an EPQ recheck, touching
the index at all is the wrong thing: rather, the behavior of mark/restore
ought to amount to saving and restoring the es_epqScanDone flag. We can
avoid finding a place to actually save the flag, for the moment, because
given the assumption that no caller will set a mark before fetching a
tuple, es_epqScanDone must always be set by the time we try to mark.
So the actual behavior change required is just to not reach the index
access if a test tuple is supplied.
The plan node types that need to consider this issue are those
that support EPQ test tuples (i.e., call ExecScan()) and also support
mark/restore; which is to say, IndexScan, IndexOnlyScan, and perhaps
CustomScan. It's tempting to try to fix the problem in one place by
teaching ExecMarkPos() itself about EPQ; but ExecMarkPos supports some
plan types that aren't Scans, and also it seems risky to make assumptions
about what a CustomScan wants to do here. Also, the most likely future
change here is to decide that we do need to support marks placed before
the first tuple, which would require additional work in IndexScan and
IndexOnlyScan in any case. Hence, fix the EPQ issue in nodeIndexscan.c
and nodeIndexonlyscan.c, accepting the small amount of code duplicated
thereby, and leave it to CustomScan providers to fix this bug if they
have it.
Back-patch to v10 where commit 09529a70b came in. In earlier branches,
the index_markpos() call is a waste of cycles when EPQ is active, but
no more than that, so it doesn't seem appropriate to back-patch further.
Discussion: https://postgr.es/m/20180126074932.3098.97815@wrigleys.postgresql.org
We have a lot of code in which option names, which from the user's
viewpoint are logically keywords, are passed through the grammar as plain
identifiers, and then matched to string literals during command execution.
This approach avoids making words into lexer keywords unnecessarily. Some
places matched these strings using plain strcmp, some using pg_strcasecmp.
But the latter should be unnecessary since identifiers would have been
downcased on their way through the parser. Aside from any efficiency
concerns (probably not a big factor), the lack of consistency in this area
creates a hazard of subtle bugs due to different places coming to different
conclusions about whether two option names are the same or different.
Hence, standardize on using strcmp() to match any option names that are
expected to have been fed through the parser.
This does create a user-visible behavioral change, which is that while
formerly all of these would work:
alter table foo set (fillfactor = 50);
alter table foo set (FillFactor = 50);
alter table foo set ("fillfactor" = 50);
alter table foo set ("FillFactor" = 50);
now the last case will fail because that double-quoted identifier is
different from the others. However, none of our documentation says that
you can use a quoted identifier in such contexts at all, and we should
discourage doing so since it would break if we ever decide to parse such
constructs as true lexer keywords rather than poor man's substitutes.
So this shouldn't create a significant compatibility issue for users.
Daniel Gustafsson, reviewed by Michael Paquier, small changes by me
Discussion: https://postgr.es/m/29405B24-564E-476B-98C0-677A29805B84@yesql.se
This is preparatory refactoring for partition-wise aggregate, which
will reuse the new subroutines for child grouping rels. It also does
not seem like a bad idea on general principle, as the function was
getting pretty long.
Jeevan Chalke. The larger patch series of which this patch is a part
was reviewed and tested by Antonin Houska, Rajkumar Raghuwanshi,
Ashutosh Bapat, David Rowley, Dilip Kumar, Konstantin Knizhnik,
Pascal Legrand, and me. Some cosmetic changes by me.
Discussion: http://postgr.es/m/CAM2+6=V64_xhstVHie0Rz=KPEQnLJMZt_e314P0jaT_oJ9MR8A@mail.gmail.com
This clause was superseded by SQL-standard syntax back in 7.3.
We've kept it around for backwards-compatibility purposes ever since;
but 15 years seems like long enough for that, especially seeing that
there are undocumented weirdnesses in how it interacts with the
SQL-standard syntax for specifying the same options.
Michael Paquier, per an observation by Daniel Gustafsson;
some small cosmetic adjustments to nearby code by me.
Discussion: https://postgr.es/m/20180115022748.GB1724@paquier.xyz
The existing "connection authorized" server log messages used OpenSSL
API calls directly, even though similar abstracted API calls exist.
Change to use the latter instead.
Change the prototypes of the functions that return the TLS version
and the cipher so that they return const char * directly instead of
copying into a buffer. That makes them slightly easier to use.
Add bits= to the message. psql shows that, so we might as well show the
same information on the client and server.
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
As the comment there stated, these were needed for old-style
user-defined functions; but since we removed support for those, we
don't need them anymore.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
We have switches already to suppress other subsidiary object properties,
such as ACLs, security labels, ownership, and tablespaces, so just on
the grounds of symmetry we should allow suppressing comments as well.
Also, commit 0d4e6ed30 added a positive reason to have this feature,
i.e. to allow obtaining the old behavior of selective pg_restore should
anyone desire that.
Recent commits have removed the cases where pg_dump emitted comments on
built-in objects that the restoring user might not have privileges to
comment on, so the original primary motivation for this feature is gone,
but it still seems at least somewhat useful in its own right.
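A usage sketch, on the assumption that the new switch is spelled
--no-comments in line with the existing suppression switches:
pg_dump --no-comments mydb > mydb.sql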
Robins Tharakan, reviewed by Fabrízio Mello
Discussion: https://postgr.es/m/CAEP4nAx22Z4ch74oJGzr5RyyjcyUSbpiFLyeYXX8pehfou92ug@mail.gmail.com
Ensure that CREATE DATABASE and related commands are issued when, and
only when, --create is specified. Previously there were scenarios
where using selective-dump switches would prevent --create from having
any effect. For example, it would fail to do anything in pg_restore
if the archive file had been made by a selective dump, because there
would be no TOC entry for the database.
Since we don't issue \connect either if we don't issue CREATE DATABASE,
this could result in unexpectedly restoring objects into the wrong
database.
Also fix pg_restore's selective restore logic so that when an object is
selected to be restored, we also restore its ACL, comment, and security
label if any. Previously there was no way to get the latter properties
except through tedious mucking about with a -L file. If, for some
reason, you don't want these properties, you can match the old behavior
by adding --no-acl etc.
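For example (dump file name hypothetical):
# issue CREATE DATABASE and reconnect before restoring:
pg_restore --create -d postgres db.dump
# selective restore now carries ACL/comment/label along; opt out:
pg_restore -d mydb -t mytab --no-acl db.dump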
While at it, try to make _tocEntryRequired() a little better organized
and better documented.
Discussion: https://postgr.es/m/32668.1516848577@sss.pgh.pa.us
get_relation_info() was too optimistic about opening indexes in
partitioned tables, which would raise errors when any queries were
planned on such tables. Fix by ignoring any indexes of the partitioned
kind.
CLUSTER (and ALTER TABLE CLUSTER ON) had a similar problem. Fix by
disallowing these commands in partitioned tables.
Fallout from 8b08f7d482.
We had some pretty ad-hoc handling of the public schema and the plpgsql
extension, which are both presumed to exist in template0 but might be
modified or deleted in the database being dumped.
Up to now, by default pg_dump would emit a CREATE EXTENSION IF NOT EXISTS
command as well as a COMMENT command for plpgsql. The usefulness of the
former is questionable, and the latter caused annoying errors in
non-superuser dump/restore scenarios. Let's instead install a rule that
built-in extensions (identified by having low-numbered OIDs) are not to be
dumped. We were doing it that way already in binary-upgrade mode, so this
just makes regular mode behave the same. It remains true that if someone
has installed a non-default ACL on the plpgsql language, that will get
dumped thanks to the pg_init_privs mechanism. This is more consistent with
the handling of built-in objects of other kinds.
Also, change the very ad-hoc mechanism that was used to avoid dumping
creation and comment commands for the public schema. Instead of hardwiring
a test in _printTocEntry(), make use of the DUMP_COMPONENT_ infrastructure
to mark that schema up-front about what we want to do with it. This has
the visible effect that the public schema won't be mentioned in the output
at all, except for updating its ACL if it has a non-default ACL.
Previously, while it was normally not mentioned, --clean mode would drop
and recreate it, again causing headaches for non-superuser usage. This
change likewise makes the public schema less special and more like other
built-in objects.
If plpgsql, or the public schema, has been removed entirely in the source
DB, that situation won't be reproduced in the destination ... but that
was true before.
Discussion: https://postgr.es/m/29048.1516812451@sss.pgh.pa.us
These were introduced in 4cbb646334, but
after further analysis and testing, they should not be necessary and
probably weren't the part of that commit that fixed anything.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Connection strings can have items with spaces in them, wrapped in
quotes. The tests, however, ran a SELECT '$connstr' upon connection,
which broke on the embedded quotes. Use dollar quoting on the connstr
to protect against this. This was hit during the development of the
macOS Secure Transport patch, but is independent of it.
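A sketch of the fix (connstr contents illustrative):
-- interpolating a quoted connstr into '...' breaks the literal;
-- the dollar-quoted form survives embedded quotes:
SELECT $$host=localhost application_name='my app'$$;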
Author: Daniel Gustafsson <daniel@yesql.se>
record_image_eq was covered a bit by the materialized view code that it
is meant to support, but record_image_cmp was not tested at all.
While we're here, add more tests to record_eq and record_cmp as well,
for symmetry.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Avoid compiler warnings on MSVC (which doesn't want to see both
__forceinline and inline) and ancient GCC (which doesn't have
__attribute__((always_inline))).
Don't force inline-ing when building at -O0, as the programmer is probably
hoping for exact source-to-object-line correspondence in that case.
(For the moment this only works for GCC; maybe we can extend it later.)
Make pg_attribute_always_inline be syntactically a drop-in replacement
for inline, rather than an additional wart.
And improve the comments.
Thomas Munro and Michail Nikolaev, small tweaks by me
Discussion: https://postgr.es/m/32278.1514863068@sss.pgh.pa.us
Discussion: https://postgr.es/m/CANtu0oiYp74brgntKOxgg1FK5+t8uQ05guSiFU6FYz_5KUhr6Q@mail.gmail.com
If we're inside a lateral subquery, there may be no unparameterized paths
for a particular child relation of an appendrel, in which case we *must*
be able to create similarly-parameterized paths for each other child
relation, else the planner will fail with "could not devise a query plan
for the given query". This means that there are situations where we'd
better be able to reparameterize at least one path for each child.
This calls into question the assumption in reparameterize_path() that
it can just punt if it feels like it. However, the only case that is
known broken right now is where the child is itself an appendrel so that
all its paths are AppendPaths. (I think possibly I disregarded that in
the original coding on the theory that nested appendrels would get folded
together --- but that only happens *after* reparameterize_path(), so it's
not excused from handling a child AppendPath.) Given that this code's been
like this since 9.3 when LATERAL was introduced, it seems likely we'd have
heard of other cases by now if there were a larger problem.
Per report from Elvis Pranskevichus. Back-patch to 9.3.
Discussion: https://postgr.es/m/5981018.zdth1YWmNy@hammer.magicstack.net