It has been reported that PL/Tcl built on macOS with GCC >=11 crashes.
The reason is that there is a hash_search() function in the operating
system's libraries, and that ends up being called instead of the one
in postgres. This has to do with how the linker resolves references
among the various candidates it has been given, and something has
evidently changed so that it now picks the OS version in this
configuration.
We found that removing the -lc from the link command line fixes this
problem. The -lc was introduced a long time ago in commit
e3909672f1, and we think the reasons
might be obsolete, so we decided that we'll try to just remove it and
see if any problems arise.
Discussion: https://www.postgresql.org/message-id/flat/a78c847a-4f79-9286-be99-e819e9e4139e%40enterprisedb.com
When rebuilding the relation mapping on subscribers, we were not
releasing the memory of the attribute map once it was no longer required.
The attribute mapping used in logical tuple conversion was refactored in
PG13 (by commit e1551f96e6) but we forgot to update the related code that
frees the attribute map.
Author: Hou Zhijie
Reviewed-by: Amit Langote, Amit Kapila, Shi yu
Backpatch-through: 10, where it was introduced
Discussion: https://postgr.es/m/OSZPR01MB6310F46CD425A967E4AEF736FDA49@OSZPR01MB6310.jpnprd01.prod.outlook.com
072132f0 used the attnum offset to access the raw_fields array when
checking that the attribute names of the header and of the relation
match, leading to incorrect results or even crashes if the attribute
numbers of a relation have changed, for example after a column has been
dropped. This commit fixes the logic to use the correct attribute
names for the header matching requirements.
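As a rough sketch of the fixed logic (identifiers here are assumptions
modeled on copyfromparse.c, not the exact patch), the attribute number
and the raw_fields index must advance independently so that dropped
columns are skipped:

    int     fldnum = 0;

    for (int attnum = 1; attnum <= tupDesc->natts; attnum++)
    {
        Form_pg_attribute attr = TupleDescAttr(tupDesc, attnum - 1);

        /* dropped columns have no corresponding field in the header */
        if (attr->attisdropped)
            continue;

        if (strcmp(cstate->raw_fields[fldnum++], NameStr(attr->attname)) != 0)
            ereport(ERROR,
                    (errcode(ERRCODE_BAD_COPY_FILE_FORMAT),
                     errmsg("column name mismatch in header line field %d",
                            fldnum)));
    }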
Also, this commit disallows HEADER MATCH in COPY TO as there is no
validation that can be done in this case.
The tests are expanded for HEADER MATCH with COPY FROM and dropped
columns, with cases where a relation has a dropped and re-added column,
as well as a reduced set of columns.
Author: Julien Rouhaud
Reviewed-by: Peter Eisentraut, Michael Paquier
Discussion: https://postgr.es/m/20220607154744.vvmitnqhyxrne5ms@jrouhaud
Second thoughts about 9cd43f6cb: given that we're staying bug-compatible
with the old behavior of using double not single quotes for extension
versions, we can simplify this completion code by pretending that
extension versions *are* identifiers, and not using VERBATIM. Then
_complete_from_query() will think that the query results are identifiers
in need of quoting, and we end up with the same behavior as before.
This doesn't work for Query_for_list_of_available_extension_versions_with_TO,
but let's just drop that: there is no other place where we handle
multi-keyword phrases that way, and it doesn't seem very desirable here
either. Handle completion of "UPDATE TO" in our more usual pattern.
Discussion: https://postgr.es/m/CAMkU=1yV+egSYrzWvbDY8VZ6bKEMrKbzxr-HTuiHi+wDgSUMgA@mail.gmail.com
We build the partition map entries on subscribers while applying the
changes for update/delete on partitions. The component relation in each
entry is closed after use, so we need to refresh it on each successive
use of the cache entry.
This problem has existed since the original commit f1ac27bfda that
introduced this code, but we didn't notice it until the recent commit
26b3455afa started to use the component relation of the partition map
cache entry.
Reported-by: Tom Lane, as per buildfarm
Author: Amit Langote, Hou Zhijie
Reviewed-by: Amit Kapila, Shi Yu
Backpatch-through: 13, where it was introduced
Discussion: https://postgr.es/m/OSZPR01MB6310F46CD425A967E4AEF736FDA49@OSZPR01MB6310.jpnprd01.prod.outlook.com
In logical replication, we check whether the target table on the
subscriber is updatable by comparing the replica identity of the table
on the publisher with that of the table on the subscriber. When the
target table is a partitioned table, we only checked its replica
identity, but not that of its partitions. This led to an assertion
failure while applying changes for update/delete, as we expect those to
succeed only when the corresponding partition has a primary key or a
replica identity defined.
Fix it by checking the replica identity of the partition table while
applying changes.
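The gist of the fix, as a hedged sketch (placement and variable names
are assumptions, not the committed code):

    /*
     * Check the replica identity of the partition being applied to,
     * not only that of the partitioned table named in the subscription.
     */
    if (!OidIsValid(RelationGetReplicaIndex(partrel)) &&
        partrel->rd_rel->relreplident != REPLICA_IDENTITY_FULL)
        ereport(ERROR,
                (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
                 errmsg("cannot update table \"%s\" because it lacks a replica identity",
                        RelationGetRelationName(partrel))));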
Reported-by: Shi Yu
Author: Shi Yu, Hou Zhijie
Reviewed-by: Amit Langote, Amit Kapila
Backpatch-through: 13, where it was introduced
Discussion: https://postgr.es/m/OSZPR01MB6310F46CD425A967E4AEF736FDA49@OSZPR01MB6310.jpnprd01.prod.outlook.com
In 02b8048ba I (tgl) got rid of the need for most tab-completion queries
to return pre-quoted identifiers. But I over-hastily removed the
quote_ident call from Query_for_list_of_available_extension_versions*
too; those still need it, because what is returned isn't an identifier
at all and will (almost?) always need quoting.
Arguably we should use quote_literal here instead. But quote_ident
works too and people may be used to that behavior, so stick with it.
In passing, fix inconsistent omission of schema-qualification in
Query_for_list_of_encodings. That's not a security issue per our
current guidelines, but it ought to be like the rest.
Jeff Janes
Discussion: https://postgr.es/m/CAMkU=1yV+egSYrzWvbDY8VZ6bKEMrKbzxr-HTuiHi+wDgSUMgA@mail.gmail.com
This reverts commits 5753d4ee32 and fe60b67250 that modified HOT to
ignore BRIN indexes. The commit message for 5753d4ee32 claims that:
When determining whether an index update may be skipped by using
HOT, we can ignore attributes indexed only by BRIN indexes. There
are no index pointers to individual tuples in BRIN, and the page
range summary will be updated anyway as it relies on visibility
info.
This is partially incorrect - it's true BRIN indexes don't point to
individual tuples, so HOT chains are not an issue, but the visibility
info is not sufficient to keep the index up to date. This can easily
result in corrupted indexes, as demonstrated in the hackers thread.
This does not mean relaxing the HOT restrictions for BRIN is a lost
cause, but it needs to handle the two aspects (allowing HOT chains and
updating the page range summaries) separately. That requires major
changes, and it's too late for that in the current dev cycle.
Reported-by: Tomas Vondra
Discussion: https://postgr.es/m/05ebcb44-f383-86e3-4f31-0a97a55634cf@enterprisedb.com
We were not updating the partition map cache on the subscriber even
when the corresponding remote rel changed. Because of this, data was
replicated incorrectly for partitioned tables after the publisher
changed the table schema.
Fix it by resetting the required entries in the partition map cache after
receiving a new relation mapping from the publisher.
Reported-by: Shi Yu
Author: Shi Yu, Hou Zhijie
Reviewed-by: Amit Langote, Amit Kapila
Backpatch-through: 13, where it was introduced
Discussion: https://postgr.es/m/OSZPR01MB6310F46CD425A967E4AEF736FDA49@OSZPR01MB6310.jpnprd01.prod.outlook.com
This reverts the changes to pg_upgrade's Makefile and .gitignore done in
15b6d21. The TAP tests run in isolation, executing pg_upgrade in
tmp_check/ in the build directory, so any files created in the
execution path (reindex_hash.sql and delete_old_cluster.{sh,bat}) never
land in the tree, making the entries unnecessary in that case.
However, not having them hurts the cleanliness of the code tree when
running ./pg_upgrade directly from src/bin/pg_upgrade/.
This commit adds back to .gitignore all the files generated in the
execution path, and the Makefile rule to clean them up if they exist.
Per gripe from Tom Lane.
Discussion: https://postgr.es/m/90595.1655227384@sss.pgh.pa.us
While building a new attrmap that maps partition attribute numbers to
remoterel's, we incorrectly updated the map for dropped column
attributes. Later, this caused a cache lookup failure when we tried to
use the map to fetch information about the attributes.
This also fixes the partition map cache invalidation, which was using
the wrong type cast to fetch the entry. We were using a stale partition
map entry after invalidation, which leads to an assertion or cache
lookup failure.
Reported-by: Shi Yu
Author: Hou Zhijie, Shi Yu
Reviewed-by: Amit Langote, Amit Kapila
Backpatch-through: 13, where it was introduced
Discussion: https://postgr.es/m/OSZPR01MB6310F46CD425A967E4AEF736FDA49@OSZPR01MB6310.jpnprd01.prod.outlook.com
This commit, in completion of 157f873, forces a ROLLBACK for
--single-transaction only when ON_ERROR_STOP is used and one of the
steps defined by -f/-c fails. Hence, COMMIT is always used when
ON_ERROR_STOP is not set, ignoring the status code of the last action
taken in the set of switches specified by -c/-f (previously a ROLLBACK
would have been issued even without ON_ERROR_STOP if the last step
failed, while a COMMIT was issued if a step in-between failed as long
as the last step succeeded, leading to more inconsistency).
While at it, this adds much more test coverage in this area when not
using ON_ERROR_STOP, with multiple switch patterns involving -c and -f
for query files, single queries and slash commands.
The behavior around ON_ERROR_STOP is arguably a bug, but there was not
much support for a backpatch to force a ROLLBACK on a step failure, so
this change is done only on HEAD for now.
Per discussion with Tom Lane and Kyotaro Horiguchi.
Discussion: https://postgr.es/m/Yqbc8bAdwnP02na4@paquier.xyz
If an application executed operations like EXEC SQL PREPARE
without having first established a database connection, it could
get a core dump instead of the expected clean failure. This
occurred because we did "pthread_getspecific(actual_connection_key)"
without ever having initialized the TSD key actual_connection_key.
The results of that are probably platform-specific, but at least
on Linux it often leads to a crash.
To fix, add calls to ecpg_pthreads_init() in the code paths that
might use actual_connection_key uninitialized. It's harmless
(and hopefully inexpensive) to do that more than once.
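The shape of the fix, roughly (ecpg_pthreads_init() and
actual_connection_key are the identifiers mentioned above; the
surrounding function is a simplified sketch):

    static struct connection *
    ecpg_get_current_connection(void)
    {
        /* Create the TSD key if no one has yet; cheap to call again. */
        ecpg_pthreads_init();

        return (struct connection *)
            pthread_getspecific(actual_connection_key);
    }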
Per bug #17514 from Okano Naoki. The problem's ancient, so
back-patch to all supported branches.
Discussion: https://postgr.es/m/17514-edd4fad547c5692c@postgresql.org
Use the same error message for all cases of pathname overrun,
since users aren't going to much care which one was too long.
Add missing newline to said error (as pg_upgrade's version
of pg_fatal requires that).
Add pathname overrun checks for the individual log files,
not just the directories.
Remove initial newline in log files; the new scheme here
guarantees that we'll never be appending to an old file.
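The checks follow the usual snprintf-truncation idiom, roughly (the
variable and file names here are illustrative):

    int     len;

    len = snprintf(log_path, sizeof(log_path), "%s/%s",
                   log_opts.logdir, "pg_upgrade_server.log");
    if (len >= (int) sizeof(log_path))
        pg_fatal("directory path for new cluster is too long\n");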
Kyotaro Horiguchi and Tom Lane
Discussion: https://postgr.es/m/20220613.120551.729848632120189555.horikyota.ntt@gmail.com
Recent additions to the subscription tests check for log entries, but
fail to account for the possible presence of an SQL error code, which
happens if log_error_verbosity is set to 'verbose'. Allow for this in
the regular expressions that are checked.
In commit ec62cb0aa, I foolishly replaced ExecEvalWholeRowVar's
lookup_rowtype_tupdesc_domain call with just lookup_rowtype_tupdesc,
because I didn't see how a domain could be involved there, and
there were no regression test cases to jog my memory. But the
existing code was correct, so revert that change and add a test
case showing why it's necessary. (Note: per comment in struct
DatumTupleFields, it is correct to produce an output tuple that's
labeled with the base composite type, not the domain; hence just
blindly looking through the domain is correct here.)
Per bug #17515 from Dan Kubb. Back-patch to v11 where domains over
composites became a thing.
Discussion: https://postgr.es/m/17515-a24737438363aca0@postgresql.org
This function can be called from mark_async_capable_plan(), a helper
function for create_append_plan(), before set_subqueryscan_references(),
to determine the triviality of a SubqueryScan that is a child of an
Append plan node. In that case the call happens before doing
finalize_plan() on the SubqueryScan (if necessary) and
set_plan_references() on the subplan, unlike when this function is
called from set_subqueryscan_references(). The reason why this is safe
isn't that obvious, so add comments explaining it.
Follow-up for commit c2bb02bc2.
Reviewed by Zhihong Yu.
Discussion: https://postgr.es/m/CAPmGK17%2BGiJBthC6va7%2B9n6t75e-M1N0U18YB2G1B%2BE5OdrNTA%40mail.gmail.com
The new show-all-results feature in psql (7844c9918) went out of its
way to show notices next to the results of the statements (in a
multi-statement string) that caused them. This also had the
consequence that notices for a single statement were not shown until
after the statement had executed, instead of right away. After some
discussion, it seems very difficult to satisfy both of these goals, so
here we are giving up on the first goal and just show the notices as
we get them. This restores the pre-7844c9918 behavior for notices.
Reported-by: Alastair McKinley <a.mckinley@analyticsengines.com>
Author: Fabien COELHO <coelho@cri.ensmp.fr>
Discussion: https://www.postgresql.org/message-id/flat/PAXPR02MB760039506C87A2083AD85575E3DA9%40PAXPR02MB7600.eurprd02.prod.outlook.com
The original advice for hard-wired SetConfigOption calls was to use
PGC_S_OVERRIDE, particularly for PGC_INTERNAL GUCs. However,
that's really overkill for PGC_INTERNAL GUCs, since there is no
possibility that we need to override a user-provided setting.
Instead use PGC_S_DYNAMIC_DEFAULT in most places, so that the
value will appear with source = 'default' in pg_settings and thereby
not be shown by psql's new \dconfig command. The one exception is
that when changing in_hot_standby in a hot-standby session, we still
use PGC_S_OVERRIDE, because people felt that seeing that in \dconfig
would be a good thing.
Similarly use PGC_S_DYNAMIC_DEFAULT for the auto-tune value of
wal_buffers (if possible, that is if wal_buffers wasn't explicitly
set to -1), and for the typical 2MB value of max_stack_depth.
In combination these changes remove four not-very-interesting
entries from the typical output of \dconfig, all of which people
fingered as "why is that showing up?" in the discussion thread.
Discussion: https://postgr.es/m/3118455.1649267333@sss.pgh.pa.us
Some locales use a comma as decimal separator (like Czech or French),
and psql's 001_basic.pl for \timing was not able to handle that
properly. This fixes the matching regexes to be able to handle both
comma and dot as possible decimal separators, as per a suggestion from
Andrew Dunstan.
psql tests were the only place with such a portability issue
(check-world passed here with a forced LANG/LANGUAGE). These tests are
new as of c0280bc, so there is no need for a backpatch.
Reported-by: Pavel Stehule
Discussion: https://postgr.es/m/CAFj8pRBz8iQmd2aOaCLvO-rJY6vZr-h6Q0qvV0J+yb78J7uiaA@mail.gmail.com
38bfae3 has moved the contents written to files by pg_upgrade under a
new directory called pg_upgrade_output.d/ located in the new cluster's
data folder, and it used a simple structure made of two subdirectories
leading to a fixed structure: log/ and dump/. This design made
pg_upgrade more fragile on repeated runs, as we could get failures when
creating one or more of those directories, while potentially losing the
logs of a previous run (logs are retained automatically on failure, and
cleaned up on success unless --retain is specified). So a user would
need to clean up pg_upgrade_output.d/ as an extra step for any repeated
calls of pg_upgrade. The most common scenario here is --check followed
by the actual upgrade, but one could also see a failure when specifying
an incorrect input argument value. Removing the logs entirely would
have the disadvantage of discarding all the past information, even if
--retain was specified at some past step.
This result is annoying for a lot of users and automated upgrade flows.
So, rather than requiring a manual removal of pg_upgrade_output.d/, this
redesigns the set of output directories in a more dynamic way, based on
a suggestion from Tom Lane and Daniel Gustafsson. pg_upgrade_output.d/
is still the base path, but a second directory level is added, mostly
named after an ISO-8601-formatted timestamp (in short human-readable,
with milliseconds appended to the name to avoid any conflicts). The
logs and dumps are saved within the same subdirectories as previously,
as of log/ and dump/, but these are located inside the subdirectory
named after the timestamp.
The logs of a given run are removed only after a successful run if
--retain is not used, and pg_upgrade_output.d/ is kept if there are any
logs from a previous run. Note that previously, pg_upgrade would have
kept the logs even after a successful --check but that was inconsistent
compared to the case without --check when using --retain. The code in
charge of the removal of the output directories is now refactored into a
single routine.
Two TAP tests are added with some --check commands (one failure case and
one success case) to cover the issue fixed here. Note that the tests
had to be tweaked a bit to fit with the new directory structure, so
that they can find any logs generated on failure. This is still going
to require a change in the buildfarm client for the case where
pg_upgrade is tested without the TAP test, though, but I'll tackle that
with a separate patch where needed.
Reported-by: Tushar Ahuja
Author: Michael Paquier
Reviewed-by: Daniel Gustafsson, Justin Pryzby
Discussion: https://postgr.es/m/77e6ecaa-2785-97aa-f229-4b6e047cbd2b@enterprisedb.com
Bug #17512 highlighted that a suitably broken data type could cause the
backend to crash if either the hash function or equality function were
in some way non-deterministic based on their input values. Such a data
type could crash the backend due to code which assumes that we'll
always find a hash table entry corresponding to an item in the Memoize
LRU list.
Here we remove the assumption that we'll always find the entry
corresponding to the given LRU list item and add run-time checks to verify
we have found the given item in the cache.
This is not a fix for bug #17512, but it will turn the crash reported by
that bug report into an internal ERROR.
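In essence, each Assert-backed lookup becomes a hard check, along these
lines (a sketch; the lookup helper name is hypothetical):

    entry = memoize_find_cache_entry(mstate, key);  /* hypothetical */
    if (unlikely(entry == NULL))
        elog(ERROR, "could not find memoization table entry");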
Reported-by: Ales Zeleny
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAApHDvpxFSTwvoYWT7kmFVSZ9zLAeHb=S9vrz=RExMgSkQNWqw@mail.gmail.com
Backpatch-through: 14, where Memoize was added.
pg_stat_get_subscription scanned one more LogicalRepWorker array entry
than is really allocated. In the worst case this could lead to SIGSEGV,
if the LogicalRepCtx data structure is near the end of shared memory.
That seems quite unlikely though (thanks to the ordering of calls in
CreateSharedMemoryAndSemaphores) and we've heard no field reports of it.
A more likely misbehavior is one row of garbage data in the function's
result, but even that is not very likely because of the check that the
pid field matches some live backend.
Report and fix by Kuntal Ghosh. This bug is old, so back-patch
to all supported branches.
Discussion: https://postgr.es/m/CAGz5QCJykEDzW6jQK6Yz7Qh_PMtD=95de_7QoocbVR2Qy8hWZA@mail.gmail.com
An error PGresult generated by libpq itself, such as a report of
connection loss, won't have broken-down error fields.
should_processing_continue() blithely assumed that
PG_DIAG_SEVERITY_NONLOCALIZED would always be present, and would
dump core if it wasn't.
Per grepping to see if 6d157e7cb's mistake was repeated elsewhere.
An error PGresult generated by libpq itself, such as a report of
connection loss, won't have broken-down error fields.
ecpg_raise_backend() blithely assumed that PG_DIAG_MESSAGE_PRIMARY
would always be present, and would end up passing a NULL string
pointer to snprintf when it isn't. That would typically crash
before 3779ac62d, and it would fail to provide a useful error report
in any case. Best practice is to substitute PQerrorMessage(conn)
in such cases, so do that.
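The defensive pattern is roughly (a sketch, not the exact committed
code):

    const char *sqlstate = PQresultErrorField(res, PG_DIAG_SQLSTATE);
    const char *message = PQresultErrorField(res, PG_DIAG_MESSAGE_PRIMARY);

    /* errors generated by libpq itself have no broken-down fields */
    if (message == NULL)
        message = PQerrorMessage(conn);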
Per bug #17421 from Masayuki Hirose. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/17421-790ff887e3188874@postgresql.org
psql --single-transaction is able to handle multiple -c and -f switches
in a single transaction since d5563d7d, but this had the surprising
behavior of forcing a transaction COMMIT even if psql failed with an
error in the client (for example incorrect path given to \copy), which
would generate an error, but still commit any changes that were already
applied in the backend. This commit makes the behavior more
consistent, by enforcing a transaction ROLLBACK if any command fails,
whether client-side or backend-side, so that no changes are applied if
an error happens in any of them.
Some tests are added on HEAD to provide coverage for all that.
Backend-side errors are unreliable as IPC::Run can complain on SIGPIPE
if psql quits before reading a query result, but that should work
properly in the case where any errors come from psql itself, which is
what the original report is about.
Reported-by: Christoph Berg
Author: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/17504-76b68018e130415e@postgresql.org
Backpatch-through: 10
The hard-wired PageOutput arguments in usage() and sibling functions
have been a perennial maintenance gotcha, and there's no reason to
think we'll ever get any better about that. Let's get rid of those
magic constants by constructing the output in a buffer where we can
count the newlines before calling PageOutput. (Perhaps this is
microscopically slower; but none of these functions are performance
critical, and anyway we might well be buying back all the cost by
avoiding having to pass most of the data through snprintf.c. I could
not detect any speed difference in a desultory check.) This also
gets rid of the need to assume that platform-specific variations in
the output are insignificant.
While at it, make the code shorter and more abstract by inventing
helper macros HELP0() and HELPN() to encapsulate the specific
output actions being invoked.
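For reference, the macros could look about like this (a sketch under
the assumption that the help text is accumulated in a PQExpBuffer named
buf, not necessarily the committed definitions):

    #define HELP0(str)       appendPQExpBufferStr(&buf, _(str))
    #define HELPN(str, ...)  appendPQExpBuffer(&buf, _(str), __VA_ARGS__)

The newlines in buf.data can then be counted just before handing the
text to PageOutput().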
Discussion: https://postgr.es/m/365160.1654289490@sss.pgh.pa.us
TAP tests are run from their own directory in the source tree, and in a
VPATH build the execution of the pg_upgrade command was leaving behind a
file in the source tree, which should be left untouched. In order to
avoid this issue, the test moves to PostgreSQL::Test::Utils::tmp_check,
so that any files generated by pg_upgrade land in the build tree rather
than the source tree. This has the nice side effect of making the
entries for such files in pg_upgrade's .gitignore and Makefile
unnecessary. This strategy is similar to psql's test
010_tab_completion.pl, though the reasons behind this choice are
different.
In passing, fix one misleading test name that was added by 99f6f19.
Per discussion with Peter Eisentraut, Andrew Dunstan, Tom Lane, Andres
Freund and myself.
Discussion: https://postgr.es/m/f80ace33-11fb-1cd3-20f8-98f51d151088@enterprisedb.com
Provide a gloss of which command does what, as all other backslash
commands have. Put the large-object command section into a more
considered spot in the list.
In passing, update the output-lines count in helpVariables()
(oversight in 7844c9918, looks like).
Thibaud Walkowiak, reviewed by Nathan Bossart and myself
Discussion: https://postgr.es/m/43f0439c-df3e-a045-ac99-af33523cc2d4@dalibo.com
Currently, we simply combine the column lists when publishing tables on
multiple publications and that can sometimes lead to unexpected behavior.
Say, if a column is published in any row-filtered publication, then the
values for that column are sent to the subscriber even for rows that don't
match the row filter, as long as the row matches the row filter for any
other publication, even if that other publication doesn't include the
column.
The main purpose of introducing a column list is to have statically
different shapes on publisher and subscriber or hide sensitive column
data. In both cases, it doesn't seem to make sense to combine column
lists.
So, we disallow the cases where the column list is different for the same
table when combining publications. It can be later extended to combine the
column lists for selective cases where required.
Reported-by: Alvaro Herrera
Author: Hou Zhijie
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/202204251548.mudq7jbqnh7r@alvherre.pgsql
Since a117cebd6, some older gcc versions issue "variable may be used
uninitialized in this function" complaints for brin_summarize_range.
Silence that using the same coding pattern as in bt_index_check_internal;
arguably, a117cebd6 had too narrow a view of which compilers might give
trouble.
Nathan Bossart and Tom Lane. Back-patch as the previous commit was.
Discussion: https://postgr.es/m/20220601163537.GA2331988@nathanxps13
Perl 5.36 has reclassified the warning condition that this test
case used, so that the expected error fails to appear. Tweak
the test so it instead exercises a case that's handled the same
way in all Perl versions of interest.
This appears to meet our standards for back-patching into
out-of-support branches: it changes no user-visible behavior
but enables testing of old branches with newer tools.
Hence, back-patch as far as 9.2.
Dagfinn Ilmari Mannsåker, per report from Jitka Plesníková.
Discussion: https://postgr.es/m/564579.1654093326@sss.pgh.pa.us
This reverts commit d9d076222f "VACUUM: ignore indexing operations
with CONCURRENTLY".
These changes caused indexes created with the CONCURRENTLY option to
miss heap tuples that were HOT-updated and HOT-pruned during the index
creation. Before these changes, HOT pruning would have been prevented
by the Xmin of the transaction creating the index, but because this
change was precisely to allow the Xmin to move forward ignoring that
backend, now other backends scanning the table can prune them. This is
not a problem for VACUUM (which requires a lock that conflicts with a
CREATE INDEX CONCURRENTLY operation), but HOT-prune can definitely
occur. In other words, Xmin advancement was sped up, but at the cost of
corrupting the resulting index.
Regrettably, this means that the new feature in PG14 that RIC/CIC on
very large tables no longer force VACUUM to retain very old tuples goes
away. We might try to implement it again in a later release, but for
now the risk of indexes missing tuples is too high and there's no easy
fix.
Backpatch to 14, where this change appeared.
Reported-by: Peter Slavov <pet.slavov@gmail.com>
Diagnosed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Diagnosed-by: Michael Paquier <michael@paquier.xyz>
Diagnosed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/17485-396609c6925b982d%40postgresql.org
We hadn't noticed this because (a) few people feed invalid
timezone abbreviation files to the server, and (b) in typical
scenarios guc.c would throw ereport(ERROR) and then transaction
abort handling would silently clean up the leaked file reference.
However, it was possible to observe file leakage warnings if one
breaks an already-active abbreviation file, because guc.c does
not throw ERROR when loading supposedly-validated settings during
session start or SIGHUP processing.
Report and fix by Kyotaro Horiguchi (cosmetic adjustments by me)
Discussion: https://postgr.es/m/20220530.173740.748502979257582392.horikyota.ntt@gmail.com
With the old logic, when the receiver had not yet attached, we would
never call shm_mq_inc_bytes_written(), even if force_flush = true
was specified. That could result in a situation where data that the
sender believes it has sent is never received.
Along the way, remove a useless function prototype for a nonexistent
function from shm_mq.h.
Commit 46846433a0 introduced these
problems.
Pavan Deolasee, with a few changes by me.
Discussion: https://postgr.es/m/CABOikdPkwtLLCTnzzmpSMXo3QZa2yXq0J7Q61ssdLFAJYrOVvQ@mail.gmail.com
foreign_data has kept around a set of tests for TRUNCATE to cover the
case of foreign tables, with[out] inheritance and with[out] partitions,
assuming that the command is not supported for this relkind.
However, TRUNCATE is supported on foreign tables if the FDW involved is
able to handle the command, like postgres_fdw.
Note that postgres_fdw includes tests to cover all the cases removed by
this commit (which had misleading comments), so these did not provide
any additional coverage anyway.
Author: Yugo Nagata
Discussion: https://postgr.es/m/20220527172543.0a2fdb469cf048b81c0967d3@sraoss.co.jp
Commit 1a36bc9db (SQL/JSON query functions) introduced STRING as a
type_func_name_keyword, thereby breaking applications that use
"string" as a table name, column name, function parameter name, etc.
That seems like a pretty bad thing, not least because the SQL spec
says that STRING is an unreserved keyword.
This is easy enough to fix so far as the core grammar is concerned.
However, doing so causes some ECPG test cases to fail, specifically
those that use "string" as a typedef name. It turns out this is
because portions of the ECPG grammar allow type_func_name_keywords
but not unreserved_keywords as typedef names. That's pretty horrid,
and it's mildly astonishing that we've not heard complaints about it
before. We can fix two of those uses trivially, but the ones in the
var_type production are less easy. As a stopgap, hard-code STRING as
an allowed alternative in var_type.
Per report from Alastair McKinley.
Discussion: https://postgr.es/m/3661437.1653855582@sss.pgh.pa.us
In the codepath when no encoding conversion is required, the check for
incomplete character at the end of input incorrectly used server
encoding's max character length, instead of the client's. Usually the
server and client encodings are the same when we're not performing
encoding conversion, but SQL_ASCII is an exception.
In passing, also fix some outdated comments that still talked about
the old COPY protocol. It was removed in v14.
Per bug #17501 from Vitaly Voronov. Backpatch to v14 where this was
introduced.
Discussion: https://www.postgresql.org/message-id/17501-128b1dd039362ae6@postgresql.org
This reverts commit 06f5295 as per issues with this approach, both in
terms of efficiency impact and stability. First, contrary to the
single-item cache for transaction IDs in transam.c, the cache may end
up not being hit for a long time, and without an invalidation mechanism
to clear it, it would cause inconsistent results on wraparound, for
example. Second, the use of SubTransGetTopmostTransaction() for the
caching has a limited impact on performance. SubTransGetParent() could
have more impact, though the benefit of the single-item approach still
needs to be proven, particularly under conditions where SLRU lookups
are stressed in parallel with overflowed snapshots (aka more than 64
subxids generated, for example).
After discussion with Andres Freund.
Discussion: https://postgr.es/m/20220524235250.gtt3uu5zktfkr4hv@alap3.anarazel.de
If a short description is specified as NULL in one of the various
DefineCustomXXXVariable() functions available to external modules to
define a custom parameter, SHOW ALL would crash. This change teaches
SHOW ALL to properly handle NULL short descriptions, as well as any code
paths that manipulate it, to gain in flexibility. Note that
help_config.c was already able to do that, when describing a set of GUCs
for postgres --describe-config.
Author: Steve Chavez
Reviewed by: Nathan Bossart, Andres Freund, Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/CAGRrpzY6hO-Kmykna_XvsTv8P2DshGiU6G3j8yGao4mk0CqjHA%40mail.gmail.com
Backpatch-through: 10
9d9c02ccd added code to allow the executor to take shortcuts when quals
on monotonic window functions guaranteed that once the qual became false
it could never become true again. When possible, baserestrictinfo quals
are converted to become these quals, which we call run conditions.
Unfortunately, in 9d9c02ccd, I forgot to update
remove_unused_subquery_outputs to teach it about these run conditions.
This could cause a WindowFunc column which was unused in the target list
but referenced by an upper-level WHERE clause to be removed from the
subquery when the qual in the WHERE clause was converted into a window run
condition. Because of this, the entire WindowClause would be removed from
the query resulting in additional rows making it into the resultset when
they should have been filtered out by the WHERE clause.
Here we fix this by recording which target list items in the subquery have
run conditions. That gets passed along to remove_unused_subquery_outputs
to tell it not to remove these items from the target list.
Bug: #17495
Reported-by: Jeremy Evans
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/17495-7ffe2fa0b261b9fa@postgresql.org
Commits a59c79564 et al. tried to sync libpq's SSL key file
permissions checks with what we've used for years in the backend.
We did not intend to create any new failure cases, but it turns out
we did: restricting the key file's ownership breaks cases where the
client is allowed to read a key file despite not having the identical
UID. In particular a client running as root used to be able to read
someone else's key file; and having seen that I suspect that there are
other, less-dubious use cases that this restriction breaks on some
platforms.
We don't really need an ownership check, since if we can read the key
file despite its having restricted permissions, it must have the right
ownership --- under normal conditions anyway, and the point of this
patch is that any additional corner cases where that works should be
deemed allowable, as they have been historically. Hence, just drop
the ownership check, and rearrange the permissions check to get rid
of its faulty assumption that geteuid() can't be zero. (Note that the
comparable backend-side code doesn't have to cater for geteuid() == 0,
since the server rejects that very early on.)
This does have the end result that the permissions safety check used
for a root user's private key file is weaker than that used for
anyone else's. While odd, root really ought to know what she's doing
with file permissions, so I think this is acceptable.
Per report from Yogendra Suralkar. Like the previous patch,
back-patch to all supported branches.
Discussion: https://postgr.es/m/MW3PR15MB3931DF96896DC36D21AFD47CA3D39@MW3PR15MB3931.namprd15.prod.outlook.com
repeat() checked for integer overflow during its calculation of the
required output space, but it just passed the resulting integer to
palloc(). This meant that result sizes between 1GB and 2GB led to
ERRCODE_INTERNAL_ERROR, "invalid memory alloc request size" rather
than ERRCODE_PROGRAM_LIMIT_EXCEEDED, "requested length too large".
That seems like a bit of a wart, so add an explicit AllocSizeIsValid
check to make these error cases uniform.
Do likewise in the sibling functions lpad() etc. While we're here,
also modernize their overflow checks to use pg_mul_s32_overflow() etc
instead of expensive divisions.
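The modernized check amounts to something like this (a sketch of the
idiom, not the exact committed code):

    int32   nbytes;

    /* compute count * len + VARHDRSZ with explicit overflow checks */
    if (unlikely(pg_mul_s32_overflow(count, len, &nbytes)) ||
        unlikely(pg_add_s32_overflow(nbytes, VARHDRSZ, &nbytes)) ||
        unlikely(!AllocSizeIsValid(nbytes)))
        ereport(ERROR,
                (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                 errmsg("requested length too large")));

    result = (text *) palloc(nbytes);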
Per complaint from Japin Li. This is basically cosmetic, so I don't
feel a need to back-patch.
Discussion: https://postgr.es/m/ME3P282MB16676ED32167189CB0462173B6D69@ME3P282MB1667.AUSP282.PROD.OUTLOOK.COM
Mistake in 5891c7a8ed, likely made when switching the default value from none
to fetch during development.
Reported-By: Nathan Bossart <nathandbossart@gmail.com>
Author: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/20220524220147.GA1298892@nathanxps13
"\r" (for progress output) must not be inside a translatable string
(gettext gets upset).
In passing, move the minimum supported version number to a separate
argument, so that we don't have to retranslate this string every year
now.
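Concretely, the pattern becomes (an illustrative sketch):

    /* wrong: gettext gets upset by \r inside the msgid */
    printf(_("%d/%d done\r"), current, total);

    /* right: print the carriage return outside the translatable string */
    printf(_("%d/%d done"), current, total);
    fputs("\r", stdout);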
The changes to show all query results (7844c9918) broke \timing output
in case of an error; it didn't update the timing result and showed
0.000 ms.
Fix by updating the timing result also in the error case. Also, for
robustness, update the timing result any time a result is obtained,
not only for the last, so a sensible value is always available.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Author: Richard Guo <guofenglinux@gmail.com>
Author: Fabien COELHO <coelho@cri.ensmp.fr>
Discussion: https://www.postgresql.org/message-id/3813350.1652111765%40sss.pgh.pa.us
On slow machines the modified test could end up switching the order in which
transactional stats are reported in one session and non-transactional stats in
another session. As stats handling of truncate is implemented by
setting live/dead rows to 0, the order in which a truncate's stats
changes are applied, relative to normal stats updates, matters. The
handling of stats for truncate hasn't changed with shared memory stats;
this is longstanding behavior.
We might want to improve truncate's stats handling in the future, but
for now just change the order of forced flushes to make the test
stable.
Reported-By: Christoph Berg <myon@debian.org>
Discussion: https://postgr.es/m/YoZf7U/WmfmFYFEx@msg.df7cb.de
ruleutils.c was coded to suppress the AS label for a SELECT output
expression if the column name is "?column?", which is the parser's
fallback if it can't think of something better. This is fine, and
avoids ugly clutter, so long as (1) nothing further up in the parse
tree relies on that column name or (2) the same fallback would be
assigned when the rule or view definition is reloaded. Unfortunately
(2) is far from certain, both because ruleutils.c might print the
expression in a different form from how it was originally written
and because FigureColname's rules might change in future releases.
So we shouldn't rely on that.
Detecting exactly whether there is any outer-level use of a SELECT
column name would be rather expensive. This patch takes the simpler
approach of just passing down a flag indicating whether there *could*
be any outer use; for example, the output column names of a SubLink
are not referenceable, and we also do not care about the names exposed
by the right-hand side of a setop. This is sufficient to suppress
unwanted clutter in all but one case in the regression tests. That
seems like reasonable evidence that it won't be too much in users'
faces, while still fixing the cases we need to fix.
Per bug #17486 from Nicolas Lutic. This issue is ancient, so
back-patch to all supported branches.
Discussion: https://postgr.es/m/17486-1ad6fd786728b8af@postgresql.org
Several places in the planner tried to clamp a double value to fit
in a "long" by doing
    (long) Min(x, (double) LONG_MAX);
This is subtly incorrect, because it casts LONG_MAX to double and
potentially back again. If long is 64 bits then the double value
is inexact, and the platform might round it up to LONG_MAX+1
resulting in an overflow and an undesirably negative output.
While it's not hard to rewrite the expression into a safe form,
let's put it into a common function to reduce the risk of someone
doing it wrong in future.
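The common function is along these lines (a sketch; the committed name
and location may differ):

    long
    clamp_cardinality_to_long(double x)
    {
        if (isnan(x))
            return LONG_MAX;
        if (x <= 0)
            return 0;

        /*
         * Any double strictly below (double) LONG_MAX converts safely;
         * (double) LONG_MAX itself may have been rounded up, so never
         * cast values at or above it.
         */
        return (x < (double) LONG_MAX) ? (long) x : LONG_MAX;
    }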
In principle this is a bug fix, but since the problem could only
manifest with group count estimates exceeding 2^63, it seems unlikely
that anyone has actually hit this or will do so anytime soon. We're
fixing it mainly to satisfy fuzzer-type tools. That being the case,
a HEAD-only fix seems sufficient.
Andrey Lepikhov
Discussion: https://postgr.es/m/ebbc2efb-7ef9-bf2f-1ada-d6ec48f70e58@postgrespro.ru
This is based on a set of suggestions from Noah, with the following
changes made:
- The set of databases created in the tests are now prefixed with
"regression" to not trigger any warnings with name restrictions when
compiling the code with -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS, and
now only the first name checks the Windows case of double quotes mixed
with backslashes.
- Fix an issue with EXTRA_REGRESS_OPTS, which was not processed in a
way consistent with 027_stream_regress.pl (missing space between the
option string and pg_regress). This got introduced in 7dd3ee5.
- Add a check on the exit code of the pg_regress command, to catch
failures after running the regression tests.
Reviewed-by: Noah Misch
Discussion: https://postgr.es/m/YoHhWD5vQzb2mmiF@paquier.xyz
This new-in-v15 test case assumed it could set max_stack_depth as high
as 2MB. You might think that'd be true on any modern platform but
you'd be wrong, as I found out while experimenting with NetBSD/hppa.
This test is about privileges not platform capabilities, so there seems
no need to use any value greater than the 100kB setting already used
in a couple of places in the core regression tests. There's certainly
no call to expect people to raise their platform's default ulimit just
to run this test.
When an implicit operator family is created, it wasn't getting reported.
Make it do so.
This has always been missing. Backpatch to 10.
Author: Masahiko Sawada <sawada.mshk@gmail.com>
Reported-by: Leslie LEMAIRE <leslie.lemaire@developpement-durable.gouv.fr>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/f74d69e151b22171e8829551b1159e77@developpement-durable.gouv.fr
_pg_version (version number based on PostgreSQL::Version) is a field
private to Cluster.pm but there was no helper routine to retrieve it
from a Cluster's node. The same is done for install_path, for example,
and the version object becomes handy when writing tests that need
version-specific handling.
Reviewed-by: Andrew Dunstan, Daniel Gustafsson
Discussion: https://postgr.es/m/YoWfoJTc987tsxpV@paquier.xyz
I rephrased the error messages to be more in the style of
option_parse_int(), and also made use of the new "detail" message
facility. I didn't actually use option_parse_int() (which could be
used for -n) because libpgfeutils wasn't used here yet and I wanted to
keep this just to string changes. But it could be done in the future.
This is a slight, convenient semantics change from what commit
0f0cfb4940 ("Fix parallel operations that prevent oldest xmin from
advancing") introduced that lets us simplify the coding in the one place
where it is used.
Backpatch to 13. This is related to commit 6fea65508a ("Tighten
ComputeXidHorizons' handling of walsenders") rewriting the code site
where this is used, which has not yet been backpatched, but it may well
be in the future.
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/202204191637.eldwa2exvguw@alvherre.pgsql
Commits 923def9a53 and 52e4f0cd47 allowed specifying column lists and row
filters for publication tables. This commit extends the
pg_publication_tables view and pg_get_publication_tables function to
display that information.
This information will be useful to users and we also need this for the
later commit that prohibits combining multiple publications with different
column lists for the same table.
Author: Hou Zhijie
Reviewed By: Amit Kapila, Alvaro Herrera, Shi Yu, Takamichi Osumi
Discussion: https://postgr.es/m/202204251548.mudq7jbqnh7r@alvherre.pgsql
We weren't checking the length of the column list in the alias clause of
an XMLTABLE or JSON_TABLE function (a "tablefunc" RTE), and it was
possible to make the server crash by passing an overly long one. Fix it
by throwing an error in that case, like the other places that deal with
alias lists.
In passing, modify the equivalent test used for join RTEs to look like
the other ones, which was different for no apparent reason.
This bug came in when XMLTABLE was born in version 10; backpatch to all
stable versions.
Reported-by: Wang Ke <krking@zju.edu.cn>
Discussion: https://postgr.es/m/17480-1c9d73565bb28e90@postgresql.org
We can use a single line to print all tuple counts that MERGE processed,
for conciseness, and elide those that are zeroes. Non-text formats
report all numbers, as is typical.
Per comment from Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20220511163350.GL19626@telsasoft.com
In order to estimate the cache hit ratio of a Memoize node, one of the
inputs we require is the estimated number of times the Memoize node will
be rescanned. The higher this number, the larger the cache hit ratio is
likely to become. Unfortunately, the value being passed as the number of
"calls" to the Memoize was incorrectly using the Nested Loop's
outer_path->parent->rows instead of outer_path->rows. This failed to
account for the fact that the outer_path might be parameterized by some
upper-level Nested Loop.
This problem could lead to Memoize plans appearing more favorable than
they might actually be. It could also lead to extended executor startup
times when work_mem values were large due to the planner setting overly
large MemoizePath->est_entries resulting in the Memoize hash table being
initially made much larger than might be required.
Fix this simply by passing outer_path->rows rather than
outer_path->parent->rows. Also, adjust the expected regression test
output for a plan change.
Reported-by: Pavel Stehule
Author: David Rowley
Discussion: https://postgr.es/m/CAFj8pRAMp%3DQsMi6sPQJ4W3hczoFJRvyXHJV3AZAZaMyTVM312Q%40mail.gmail.com
Backpatch-through: 14, where Memoize was introduced
Per BF animal chipmunk: CREATE DATABASE could apparently fail due to an
AV process being in the template database and not quitting fast enough
for the 5 second timeout in CountOtherDBBackends(). The test script had
autovacuum_naptime=1s to encourage more activity and opening of fds, but
that wasn't strictly necessary for this test. Take it out.
Per BF animal skink: there was a 300s timeout for all tests in the
script, but apparently that was not enough under valgrind. Let's use
the standard timeout $PostgreSQL::Test::Utils::timeout_default, but
restart it for each query we run.
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKGKa8HNJaA24gqiiFoGy0ysndeVoJsHvX_q1-DVLFaGAmw%40mail.gmail.com
I started out with the intention to rename value_type to item_type to
avoid a collision with a typedef name that appears on some platforms.
Along the way, I noticed that the adjacent field "format" was not being
correctly handled by the backend/nodes/ infrastructure functions:
copyfuncs.c erroneously treated it as a scalar, while equalfuncs,
outfuncs, and readfuncs omitted handling it at all. This looks like
it might be cosmetic at the moment because the field is always NULL
after parse analysis; but that's likely a bug in itself, and the code's
certainly not very future-proof. Let's fix it while we can still do so
without forcing an initdb on beta testers.
Further study found a few other inconsistencies in the backend/nodes/
infrastructure for the recently-added JSON node types, so fix those too.
catversion bumped because of potential change in stored rules.
Discussion: https://postgr.es/m/526703.1652385613@sss.pgh.pa.us
Currently, preloaded libraries are expected to request additional
shared memory and LWLocks in _PG_init(). However, it is not unusual
for such requests to depend on MaxBackends, which won't be
initialized at that time. Such requests could also depend on GUCs
that other modules might change. This introduces a new hook where
modules can safely use MaxBackends and GUCs to request additional
shared memory and LWLocks.
Furthermore, this change restricts requests for shared memory and
LWLocks to this hook. Previously, libraries could make requests
until the size of the main shared memory segment was calculated.
Unlike before, we no longer silently ignore requests received at
invalid times. Instead, we FATAL if someone tries to request
additional shared memory or LWLocks outside of the hook.
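For module authors, usage looks roughly like this (the hook is the one
added here; the my_* names and the per-backend struct are placeholders):

    static shmem_request_hook_type prev_shmem_request_hook = NULL;
    static void my_shmem_request(void);

    void
    _PG_init(void)
    {
        prev_shmem_request_hook = shmem_request_hook;
        shmem_request_hook = my_shmem_request;
    }

    static void
    my_shmem_request(void)
    {
        if (prev_shmem_request_hook)
            prev_shmem_request_hook();

        /* MaxBackends is valid here, unlike in _PG_init() */
        RequestAddinShmemSpace(mul_size(MaxBackends,
                                        sizeof(MyBackendSlot)));
        RequestNamedLWLockTranche("my_module", 1);
    }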
Nathan Bossart and Julien Rouhaud
Discussion: https://postgr.es/m/20220412210112.GA2065815%40nathanxps13
Discussion: https://postgr.es/m/Yn2jE/lmDhKtkUdr@paquier.xyz
pgstat_report_stat() is only supposed to be called outside of transactions. In
5891c7a8ed I added a pgstat_report_stat() call into LogicalRepApplyLoop()'s
timeout branch. While not commonly reached inside a transaction, it is
reachable (e.g. due to network bottlenecks or the sender being stalled / slow
for some reason).
To fix, add a !IsTransactionState() check.
No test added because there's no easy way to reproduce this case without
patching the code.
Reported-By: Erik Rijkers <er@xs4all.nl>
Discussion: https://postgr.es/m/b3463b8c-2328-dcac-0136-af95715493c1@xs4all.nl
A few places used binary_upgrade_* variables without including the header,
which worked without warnings because the variables are defined in those
places. However that can cause linker complaints with MSVC - except that we
don't see them right now, due to the use of a symbol export file.
Discussion: https://postgr.es/m/20220512164513.vaheofqp2q24l65r@alap3.anarazel.de
Run renumber_oids.pl to move high-numbered OIDs down, as per pre-beta
tasks specified by RELEASE_CHANGES. For reference, the command was
./renumber_oids.pl --first-mapped-oid 8000 --target-oid 6205
The original coding did this in pqDropServerData(), which seems
fairly backwards. Pending commands are more like already-queued
output data, which is dropped in pqDropConnection(). Moving the
operation means that we clear the command queue immediately upon
detecting connection drop, which improves the sanity of subsequent
behavior. In particular this eliminates duplicated error message
text as a consequence of code added in b15f25446, which supposed
that a nonempty command queue must mean the prior operation is
still active.
There might be an argument for backpatching this to v14; but as with
b15f25446, I'm unsure about interactions with 618c16707. For now,
given the lack of complaints about v14's behavior, leave it alone.
Per report from Peter Eisentraut.
Discussion: https://postgr.es/m/de57761c-b99b-3435-b0a6-474c72b1149a@enterprisedb.com
This follows in the footsteps of commit 2591ee8ec by removing one more
ill-advised shortcut from planning of GroupingFuncs. It's true that
we don't intend to execute the argument expression(s) at runtime, but
we still have to process any Vars appearing within them, or we risk
failure at setrefs.c time (or more fundamentally, in EXPLAIN trying
to print such an expression). Vars in upper plan nodes have to have
referents in the next plan level, whether we ever execute 'em or not.
Per bug #17479 from Michael J. Sullivan. Back-patch to all supported
branches.
Richard Guo
Discussion: https://postgr.es/m/17479-6260deceaf0ad304@postgresql.org
Three variables in pqsignal.h (UnBlockSig, BlockSig and StartupBlockSig)
were not marked with PGDLLIMPORT, as they are declared in a way that
prevents mark_pgdllimport.pl from detecting them. These variables are
redefined in a style more consistent with the other headers, allowing
the script to find and mark them.
PGDLLIMPORT was missing for __pg_log_level in logging.h, so add it
back. The marking was accidentally removed in 9a374b77, just after its
addition in 8ec5694.
While at it, add a comment in mark_pgdllimport.pl explaining the
arguments needed by the script (aka a list of header paths).
Reported-by: Andres Freund
Discussion: https://postgr.es/m/20220506234924.6mxxotl3xl63db3l@alap3.anarazel.de