pg_stat_get_subscription scanned one more LogicalRepWorker array entry
than is really allocated. In the worst case this could lead to SIGSEGV,
if the LogicalRepCtx data structure is near the end of shared memory.
That seems quite unlikely though (thanks to the ordering of calls in
CreateSharedMemoryAndSemaphores) and we've heard no field reports of it.
A more likely misbehavior is one row of garbage data in the function's
result, but even that is not very likely because of the check that the
pid field matches some live backend.
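For illustration, a hedged sketch of the corrected loop shape (variable
names are illustrative, not the committed code; the real function walks
the workers array in LogicalRepCtx):

    /* Hypothetical sketch: scan exactly the allocated entries. */
    for (int i = 0; i < max_logical_replication_workers; i++)  /* "<=" read one entry too far */
    {
        LogicalRepWorker *worker = &LogicalRepCtx->workers[i];

        /* copy this worker's state; skip it unless its pid matches a live backend */
    }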
Report and fix by Kuntal Ghosh. This bug is old, so back-patch
to all supported branches.
Discussion: https://postgr.es/m/CAGz5QCJykEDzW6jQK6Yz7Qh_PMtD=95de_7QoocbVR2Qy8hWZA@mail.gmail.com
An error PGresult generated by libpq itself, such as a report of
connection loss, won't have broken-down error fields.
should_processing_continue() blithely assumed that
PG_DIAG_SEVERITY_NONLOCALIZED would always be present, and would
dump core if it wasn't.
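For reference, a hedged, generic sketch of the safer libpq usage (example
code, not the function as committed): PQresultErrorField() can return NULL
for any field, so check before using it.

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Generic sketch: never assume a broken-down error field is present. */
    static void
    report_severity(const PGresult *res)
    {
        const char *sev = PQresultErrorField(res, PG_DIAG_SEVERITY_NONLOCALIZED);

        if (sev == NULL)
            sev = PQresultErrorField(res, PG_DIAG_SEVERITY);  /* pre-9.6 servers */
        if (sev == NULL)
            sev = "ERROR";  /* e.g. an error PGresult generated by libpq itself */

        fprintf(stderr, "got %s from server\n", sev);
    }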
Per grepping to see if 6d157e7cb's mistake was repeated elsewhere.
An error PGresult generated by libpq itself, such as a report of
connection loss, won't have broken-down error fields.
ecpg_raise_backend() blithely assumed that PG_DIAG_MESSAGE_PRIMARY
would always be present, and would end up passing a NULL string
pointer to snprintf when it isn't. That would typically crash
before 3779ac62d, and it would fail to provide a useful error report
in any case. Best practice is to substitute PQerrorMessage(conn)
in such cases, so do that.
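As a reference for the pattern (a hedged, generic sketch, not the ecpglib
code itself): when PG_DIAG_MESSAGE_PRIMARY is absent, fall back to
PQerrorMessage(conn), which always returns a usable string.

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Generic sketch of the substitution described above. */
    static void
    print_backend_error(PGconn *conn, const PGresult *result)
    {
        char *message = PQresultErrorField(result, PG_DIAG_MESSAGE_PRIMARY);

        if (message == NULL || message[0] == '\0')
            message = PQerrorMessage(conn);  /* e.g. report of connection loss */

        fprintf(stderr, "backend error: %s\n", message);
    }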
Per bug #17421 from Masayuki Hirose. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/17421-790ff887e3188874@postgresql.org
psql --single-transaction has been able to handle multiple -c and -f
switches in a single transaction since d5563d7d, but it had the surprising
behavior of forcing a transaction COMMIT even if psql failed with a
client-side error (for example an incorrect path given to \copy), so any
changes already applied in the backend were still committed. This commit
makes the behavior more consistent, by enforcing a transaction ROLLBACK if
any command fails, whether client-side or backend-side, so that no changes
are applied if an error happens in any of them.
Some tests are added on HEAD to provide coverage for all that. Testing
backend-side errors is unreliable, as IPC::Run can complain about SIGPIPE
if psql quits before reading a query result, but the tests work properly
for errors coming from psql itself, which is what the original report is
about.
Reported-by: Christoph Berg
Author: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/17504-76b68018e130415e@postgresql.org
Backpatch-through: 10
The hard-wired PageOutput arguments in usage() and sibling functions
have been a perennial maintenance gotcha, and there's no reason to
think we'll ever get any better about that. Let's get rid of those
magic constants by constructing the output in a buffer where we can
count the newlines before calling PageOutput. (Perhaps this is
microscopically slower; but none of these functions are performance
critical, and anyway we might well be buying back all the cost by
avoiding having to pass most of the data through snprintf.c. I could
not detect any speed difference in a desultory check.) This also
gets rid of the need to assume that platform-specific variations in
the output are insignificant.
While at it, make the code shorter and more abstract by inventing
helper macros HELP0() and HELPN() to encapsulate the specific
output actions being invoked.
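To give an idea of the shape of the change, a rough, self-contained sketch
(hypothetical code; the real psql implementation appends to a PQExpBuffer
and hands the line count to PageOutput()):

    #include <stdio.h>

    /*
     * Hypothetical sketch: build the text in a buffer, then count newlines
     * instead of hard-wiring a line count.  Assumes the buffer is big enough.
     */
    static char buf[8192];
    static size_t buflen = 0;

    #define HELP0(str)      (buflen += snprintf(buf + buflen, sizeof(buf) - buflen, "%s", (str)))
    #define HELPN(str, ...) (buflen += snprintf(buf + buflen, sizeof(buf) - buflen, (str), __VA_ARGS__))

    int
    main(void)
    {
        int     nlcount = 0;

        HELP0("psql is the PostgreSQL interactive terminal.\n\n");
        HELPN("Usage:\n  %s [OPTION]... [DBNAME [USERNAME]]\n", "psql");

        for (const char *p = buf; *p; p++)
            if (*p == '\n')
                nlcount++;

        /* the real code would now call PageOutput(nlcount, ...) */
        printf("%s(%d lines)\n", buf, nlcount);
        return 0;
    }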
Discussion: https://postgr.es/m/365160.1654289490@sss.pgh.pa.us
TAP tests are run from their own directory in the source tree, and in a
VPATH build the execution of the pg_upgrade command was leaving behind a
file in the source tree, which should be left untouched. To avoid this
issue, the test now switches its working directory to
PostgreSQL::Test::Utils::tmp_check, so that any files generated by
pg_upgrade end up in the build tree rather than the source tree. A nice
side effect is that the entries for such files in pg_upgrade's .gitignore
and Makefile are no longer necessary. This strategy is similar to psql's
test 010_tab_completion.pl, though the reasons behind this choice are
different.
In passing, fix one misleading test name that was added by 99f6f19.
Per discussion with Peter Eisentraut, Andrew Dunstan, Tom Lane, Andres
Freund and myself.
Discussion: https://postgr.es/m/f80ace33-11fb-1cd3-20f8-98f51d151088@enterprisedb.com
Provide a gloss of which command does what, as all other backslash
commands have. Put the large-object command section into a more
considered spot in the list.
In passing, update the output-lines count in helpVariables()
(oversight in 7844c9918, looks like).
Thibaud Walkowiak, reviewed by Nathan Bossart and myself
Discussion: https://postgr.es/m/43f0439c-df3e-a045-ac99-af33523cc2d4@dalibo.com
The previous entry invited confusion between what uniq() does
by itself and what it does when combined with sort(). The latter
usage is pretty useful so we should show it, but add an additional
example to clarify the results of uniq() alone.
Per suggestion from Martin Kalcher. Back-patch to v13, where
we switched to formatting that supports multiple examples.
Discussion: https://postgr.es/m/165407884456.573551.8779012279828726162@wrigleys.postgresql.org
The regression tests of libpq have been converted to a TAP test as of
ac25173, but they were not run when running the global make targets
mentioned in $subject, as the code only considered src/interfaces/ecpg/
as a path to run tests under. This switches to src/interfaces/, so that
the tests of libpq are always included in each run.
Reported-by: Justin Pryzby
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/20220529151850.GN19626@telsasoft.com
Currently, we simply combine the column lists when a table is published by
multiple publications, and that can sometimes lead to unexpected behavior.
For example, if a column is published in any row-filtered publication, then
the values for that column are sent to the subscriber even for rows that
don't match that row filter, as long as the row matches the row filter of
any other publication, even if that other publication doesn't include the
column.
The main purpose of introducing a column list is to allow statically
different table shapes on publisher and subscriber, or to hide sensitive
column data. In both cases, it doesn't seem to make sense to combine
column lists.
So, we now disallow combining publications that have different column
lists for the same table. This can later be extended to combine the
column lists for selected cases where required.
Reported-by: Alvaro Herrera
Author: Hou Zhijie
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/202204251548.mudq7jbqnh7r@alvherre.pgsql
Since a117cebd6, some older gcc versions issue "variable may be used
uninitialized in this function" complaints for brin_summarize_range.
Silence that using the same coding pattern as in bt_index_check_internal;
arguably, a117cebd6 had too narrow a view of which compilers might give
trouble.
Nathan Bossart and Tom Lane. Back-patch as the previous commit was.
Discussion: https://postgr.es/m/20220601163537.GA2331988@nathanxps13
Perl 5.36 has reclassified the warning condition that this test
case used, so that the expected error fails to appear. Tweak
the test so it instead exercises a case that's handled the same
way in all Perl versions of interest.
This appears to meet our standards for back-patching into
out-of-support branches: it changes no user-visible behavior
but enables testing of old branches with newer tools.
Hence, back-patch as far as 9.2.
Dagfinn Ilmari Mannsåker, per report from Jitka Plesníková.
Discussion: https://postgr.es/m/564579.1654093326@sss.pgh.pa.us
The example given for anyelement <@ anymultirange didn't return
true as claimed; adjust it so it does.
In passing, change a couple of sample results where the modern
numeric-based logic produces a different number of trailing zeroes
than before.
Erik Rijkers
Discussion: https://postgr.es/m/cc35735d-1ec1-5bb3-9e27-cddbab7afa23@xs4all.nl
The PostgreSQL limitations section of the documentation mentioned the
limit on the number of columns that can exist in a table. Users might be
surprised to find that there's also a limit on the number of columns that
can exist in a targetlist, which they can run into by selecting a large
number of columns from several tables with many columns. Here we document
that this limitation exists and mention what the limit actually is.
Wording proposal by Alvaro Herrera
Reported-by: Vladimir Sitnikov
Author: Dave Cramer
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAB=Je-E18aTYpNqje4mT0iEADpeGLSzwUvo3H9kRRuDdsNo4aQ@mail.gmail.com
Backpatch-through: 12, where the limitations section was added
This reverts commit d9d076222f "VACUUM: ignore indexing operations
with CONCURRENTLY".
These changes caused indexes created with the CONCURRENTLY option to
miss heap tuples that were HOT-updated and HOT-pruned during the index
creation. Before these changes, HOT pruning would have been prevented
by the Xmin of the transaction creating the index, but because this
change was precisely to allow the Xmin to move forward ignoring that
backend, now other backends scanning the table can prune them. This is
not a problem for VACUUM (which requires a lock that conflicts with a
CREATE INDEX CONCURRENTLY operation), but HOT-prune can definitely
occur. In other words, Xmin advancement was sped up, but at the cost of
corrupting the resulting index.
Regrettably, this means that the new feature in PG14 that RIC/CIC on
very large tables no longer force VACUUM to retain very old tuples goes
away. We might try to implement it again in a later release, but for
now the risk of indexes missing tuples is too high and there's no easy
fix.
Backpatch to 14, where this change appeared.
Reported-by: Peter Slavov <pet.slavov@gmail.com>
Diagnosed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Diagnosed-by: Michael Paquier <michael@paquier.xyz>
Diagnosed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/17485-396609c6925b982d%40postgresql.org
We hadn't noticed this because (a) few people feed invalid
timezone abbreviation files to the server, and (b) in typical
scenarios guc.c would throw ereport(ERROR) and then transaction
abort handling would silently clean up the leaked file reference.
However, it was possible to observe file leakage warnings if one
breaks an already-active abbreviation file, because guc.c does
not throw ERROR when loading supposedly-validated settings during
session start or SIGHUP processing.
Report and fix by Kyotaro Horiguchi (cosmetic adjustments by me)
Discussion: https://postgr.es/m/20220530.173740.748502979257582392.horikyota.ntt@gmail.com
runtime.sgml contains a passing reference to the minimum server
version that pg_dump[all] can dump from. That was 7.0 for many
years, but when 64f3524e2 raised it to 8.0, we missed updating this
bit. Then when 30e7c175b raised it to 9.2, we missed it again.
Given that track record, I'm not too hopeful that we'll remember
to fix this in future changes ... but for now, make the docs match
reality in each branch.
Noted by Daniel Westermann.
Discussion: https://postgr.es/m/GV0P278MB041917EB3E2FE8704B5AE2C6D2DC9@GV0P278MB0419.CHEP278.PROD.OUTLOOK.COM
With the old logic, when the receiver had not yet attached, we would
never call shm_mq_inc_bytes_written(), even if force_flush = true
was specified. That could result in a situation where data that the
sender believes it has sent is never received.
Along the way, remove a useless function prototype for a nonexistent
function from shm_mq.h.
Commit 46846433a0 introduced these
problems.
Pavan Deolasee, with a few changes by me.
Discussion: https://postgr.es/m/CABOikdPkwtLLCTnzzmpSMXo3QZa2yXq0J7Q61ssdLFAJYrOVvQ@mail.gmail.com
foreign_data has kept around a set of tests for TRUNCATE covering the case
of foreign tables, with[out] inheritance and with[out] partitions, assuming
that the command is not supported for this relkind.
However, TRUNCATE is supported on foreign tables if the FDW involved is
able to handle the command, like postgres_fdw.
Note that postgres_fdw includes tests to cover all the cases removed by
this commit (which had misleading comments), so these did not provide
any additional coverage anyway.
Author: Yugo Nagata
Discussion: https://postgr.es/m/20220527172543.0a2fdb469cf048b81c0967d3@sraoss.co.jp
Commit 1a36bc9db (SQL/JSON query functions) introduced STRING as a
type_func_name_keyword, thereby breaking applications that use
"string" as a table name, column name, function parameter name, etc.
That seems like a pretty bad thing, not least because the SQL spec
says that STRING is an unreserved keyword.
This is easy enough to fix so far as the core grammar is concerned.
However, doing so causes some ECPG test cases to fail, specifically
those that use "string" as a typedef name. It turns out this is
because portions of the ECPG grammar allow type_func_name_keywords
but not unreserved_keywords as typedef names. That's pretty horrid,
and it's mildly astonishing that we've not heard complaints about it
before. We can fix two of those uses trivially, but the ones in the
var_type production are less easy. As a stopgap, hard-code STRING as
an allowed alternative in var_type.
Per report from Alastair McKinley.
Discussion: https://postgr.es/m/3661437.1653855582@sss.pgh.pa.us
The information generated when track_activities is enabled is accessible
to superusers and to roles with the privileges of pg_read_all_stats; in
addition, a role can see the information of any roles whose privileges it
has. The original text did not mention the last point, while the change
done in ac1ae47 was unclear about the second point.
Per discussion with Nathan Bossart.
Discussion: https://postgr.es/m/20220521185743.GA886636@nathanxps13
Backpatch-through: 10
In the codepath when no encoding conversion is required, the check for
incomplete character at the end of input incorrectly used server
encoding's max character length, instead of the client's. Usually the
server and client encodings are the same when we're not performing
encoding conversion, but SQL_ASCII is an exception.
In passing, also fix some outdated comments that still talked about
the old COPY protocol. It was removed in v14.
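To illustrate the distinction, a hedged sketch with illustrative variable
names (not the committed code):

    /*
     * Hedged sketch: when deciding whether the bytes at the end of the
     * input buffer might be an incomplete multibyte character, the bound
     * must be the client encoding's maximum character length, not the
     * server's (pg_database_encoding_max_length()).
     */
    int   max_char_len = pg_encoding_max_length(file_encoding);  /* client-side encoding */

    if (bytes_left > 0 && bytes_left < max_char_len)
    {
        /* possibly a character split across reads; wait for more input */
    }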
Per bug #17501 from Vitaly Voronov. Backpatch to v14 where this was
introduced.
Discussion: https://www.postgresql.org/message-id/17501-128b1dd039362ae6@postgresql.org
The form taking anymultirange had not been documented. This was
fixed in HEAD in b21c4cf95, but that should have been back-patched
to v14 since the function was added there. Do so now.
Also, the form taking anyrange was incorrectly documented as
returning anymultirange, when it returns anyrange.
Remove b21c4cf95 from the v15 release notes, since it no longer
qualifies as new-in-v15.
Noted by Shay Rojansky.
Discussion: https://postgr.es/m/CADT4RqAktzP7t6SFf0Xqm9YhahzvsmxFbzXe-gFOd=+_CHm0JA@mail.gmail.com
This reverts commit 06f5295 as per issues with this approach, both in
terms of efficiency impact and stability. First, unlike the single-item
cache for transaction IDs in transam.c, this cache may go unhit for a long
time, and without an invalidation mechanism to clear it, it would cause
inconsistent results on wraparound, for example. Second, caching the
result of SubTransGetTopmostTransaction() has only a limited impact on
performance. Caching SubTransGetParent() could have more impact, though
the benefit of the single-item approach still needs to be proven,
particularly under conditions where SLRU lookups are stressed in parallel
with overflowed snapshots (i.e. more than 64 subxids generated, for
example).
After discussion with Andres Freund.
Discussion: https://postgr.es/m/20220524235250.gtt3uu5zktfkr4hv@alap3.anarazel.de
If a short description is specified as NULL in one of the various
DefineCustomXXXVariable() functions available to external modules to
define a custom parameter, SHOW ALL would crash. This change teaches
SHOW ALL, as well as any other code paths that manipulate short
descriptions, to handle NULL properly, gaining in flexibility. Note that
help_config.c was already able to do that, when describing a set of GUCs
for postgres --describe-config.
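For context, a hedged sketch of how an extension can register such a
parameter (the extension and variable names are hypothetical):

    #include "postgres.h"
    #include "fmgr.h"
    #include "utils/guc.h"

    PG_MODULE_MAGIC;

    static int my_fetch_size = 100;

    void _PG_init(void);

    void
    _PG_init(void)
    {
        /* A NULL short_desc used to make SHOW ALL crash; it is now handled. */
        DefineCustomIntVariable("my_extension.fetch_size",
                                NULL,   /* short_desc */
                                NULL,   /* long_desc */
                                &my_fetch_size,
                                100,    /* boot value */
                                1,      /* min */
                                10000,  /* max */
                                PGC_USERSET,
                                0,      /* flags */
                                NULL, NULL, NULL);  /* hooks */
    }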
Author: Steve Chavez
Reviewed-by: Nathan Bossart, Andres Freund, Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/CAGRrpzY6hO-Kmykna_XvsTv8P2DshGiU6G3j8yGao4mk0CqjHA%40mail.gmail.com
Backpatch-through: 10
9d9c02ccd added code to allow the executor to take shortcuts when quals
on monotonic window functions guaranteed that once the qual became false
it could never become true again. When possible, baserestrictinfo quals
are converted to become these quals, which we call run conditions.
Unfortunately, in 9d9c02ccd, I forgot to update
remove_unused_subquery_outputs to teach it about these run conditions.
This could cause a WindowFunc column which was unused in the target list
but referenced by an upper-level WHERE clause to be removed from the
subquery when the qual in the WHERE clause was converted into a window run
condition. Because of this, the entire WindowClause would be removed from
the query, resulting in additional rows making it into the result set when
they should have been filtered out by the WHERE clause.
Here we fix this by recording which target list items in the subquery have
run conditions. That gets passed along to remove_unused_subquery_outputs
to tell it not to remove these items from the target list.
Bug: #17495
Reported-by: Jeremy Evans
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/17495-7ffe2fa0b261b9fa@postgresql.org
Commits a59c79564 et al. tried to sync libpq's SSL key file
permissions checks with what we've used for years in the backend.
We did not intend to create any new failure cases, but it turns out
we did: restricting the key file's ownership breaks cases where the
client is allowed to read a key file despite not having the identical
UID. In particular a client running as root used to be able to read
someone else's key file; and having seen that I suspect that there are
other, less-dubious use cases that this restriction breaks on some
platforms.
We don't really need an ownership check, since if we can read the key
file despite its having restricted permissions, it must have the right
ownership --- under normal conditions anyway, and the point of this
patch is that any additional corner cases where that works should be
deemed allowable, as they have been historically. Hence, just drop
the ownership check, and rearrange the permissions check to get rid
of its faulty assumption that geteuid() can't be zero. (Note that the
comparable backend-side code doesn't have to cater for geteuid() == 0,
since the server rejects that very early on.)
This does have the end result that the permissions safety check used
for a root user's private key file is weaker than that used for
anyone else's. While odd, root really ought to know what she's doing
with file permissions, so I think this is acceptable.
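Roughly, the resulting policy looks like this (a hedged approximation,
not the exact libpq code):

    #include <stdbool.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Hedged approximation of the check described above. */
    static bool
    key_file_permissions_ok(const char *fname)
    {
        struct stat buf;

        if (stat(fname, &buf) != 0 || !S_ISREG(buf.st_mode))
            return false;

        if (geteuid() == 0)
        {
            /* root: only insist the file isn't accessible to the world */
            return (buf.st_mode & S_IRWXO) == 0;
        }

        /* everyone else: no group or world access, and no ownership check */
        return (buf.st_mode & (S_IRWXG | S_IRWXO)) == 0;
    }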
Per report from Yogendra Suralkar. Like the previous patch,
back-patch to all supported branches.
Discussion: https://postgr.es/m/MW3PR15MB3931DF96896DC36D21AFD47CA3D39@MW3PR15MB3931.namprd15.prod.outlook.com
Foreign tables can be partitioned, but previous documentation commits
left the syntax synopsis both incomplete and incorrect.
Justin Pryzby and Amit Langote
Discussion: http://postgr.es/m/20220521130922.GX19626@telsasoft.com
repeat() checked for integer overflow during its calculation of the
required output space, but it just passed the resulting integer to
palloc(). This meant that result sizes between 1GB and 2GB led to
ERRCODE_INTERNAL_ERROR, "invalid memory alloc request size" rather
than ERRCODE_PROGRAM_LIMIT_EXCEEDED, "requested length too large".
That seems like a bit of a wart, so add an explicit AllocSizeIsValid
check to make these error cases uniform.
Do likewise in the sibling functions lpad() etc. While we're here,
also modernize their overflow checks to use pg_mul_s32_overflow() etc
instead of expensive divisions.
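For reference, a hedged sketch of the modernized pattern (illustrative
code, not the committed change):

    #include "postgres.h"
    #include "common/int.h"
    #include "utils/memutils.h"

    /*
     * Hedged sketch: overflow-aware size computation plus an explicit
     * AllocSizeIsValid() check, so oversize requests fail with
     * ERRCODE_PROGRAM_LIMIT_EXCEEDED instead of an internal error.
     */
    static int32
    repeat_result_size(int32 slen, int32 count)
    {
        int32   tlen;

        if (pg_mul_s32_overflow(count, slen, &tlen) ||
            pg_add_s32_overflow(tlen, VARHDRSZ, &tlen) ||
            !AllocSizeIsValid(tlen))
            ereport(ERROR,
                    (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                     errmsg("requested length too large")));

        return tlen;
    }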
Per complaint from Japin Li. This is basically cosmetic, so I don't
feel a need to back-patch.
Discussion: https://postgr.es/m/ME3P282MB16676ED32167189CB0462173B6D69@ME3P282MB1667.AUSP282.PROD.OUTLOOK.COM
Mistake in 5891c7a8ed, likely made when switching the default value from none
to fetch during development.
Reported-By: Nathan Bossart <nathandbossart@gmail.com>
Author: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/20220524220147.GA1298892@nathanxps13
"\r" (for progress output) must not be inside a translatable string
(gettext gets upset).
In passing, move the minimum supported version number to a separate
argument, so that we don't have to retranslate this string every year
now.
The changes to show all query results (7844c9918) broke \timing output
in case of an error; it didn't update the timing result and showed
0.000 ms.
Fix by updating the timing result also in the error case. Also, for
robustness, update the timing result any time a result is obtained,
not only for the last, so a sensible value is always available.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Author: Richard Guo <guofenglinux@gmail.com>
Author: Fabien COELHO <coelho@cri.ensmp.fr>
Discussion: https://www.postgresql.org/message-id/3813350.1652111765%40sss.pgh.pa.us
On slow machines the modified test could end up switching the order in
which transactional stats are reported in one session and
non-transactional stats in another session. As the stats handling of
truncate is implemented as setting live/dead rows to 0, the order in which
a truncate's stats changes are applied, relative to normal stats updates,
matters. The handling of stats for truncate hasn't changed with shared
memory stats; this is longstanding behavior.
We might want to improve truncate's stats handling in the future, but for
now just change the order of forced flushes to make the test stable.
Reported-By: Christoph Berg <myon@debian.org>
Discussion: https://postgr.es/m/YoZf7U/WmfmFYFEx@msg.df7cb.de