The Truncate operation acquires an exclusive lock on the target relation
and indexes, and then at commit waits for logical replication of the
operation to finish. Because we were acquiring a shared lock on the
target index to get index attributes in pgoutput while sending the
changes for the Truncate operation, this led to a deadlock.
Actually, we don't need to acquire a lock on the target index as we build
the cache entry using a historic snapshot and all the later changes are
absorbed while decoding WAL. So, we wrote a special-purpose function for
logical replication that gets a bitmap of replica identity attribute
numbers without locking the target index.
We decided not to backpatch this as there doesn't seem to be any field
complaint about this issue since it was introduced in commit 5dfd1e5a in
v11.
Reported-by: Haiying Tang
Author: Takamichi Osumi, test case by Li Japin
Reviewed-by: Amit Kapila, Ajin Cherian
Discussion: https://postgr.es/m/OS0PR01MB6113C2499C7DC70EE55ADB82FB759@OS0PR01MB6113.jpnprd01.prod.outlook.com
Commit 2ec993a7c, which added triggers on views, modified the rewriter
to add dummy entries like "SET x = x" for all columns that weren't
actually being updated by the user in any UPDATE directed at a view.
That was needed at the time to produce a complete "NEW" row to pass
to the trigger. Later it was found to cause problems for ordinary
updatable views, so commit cab5dc5da restricted it to happen only for
trigger-updatable views. But in the wake of commit 86dc90056, we
really don't need it at all. nodeModifyTable.c populates the trigger
"OLD" row from the whole-row variable that is generated for the view,
and then it computes the "NEW" row using that old row and the UPDATE
targetlist. So there is no need for the UPDATE tlist to have dummy
entries, any more than it needs them for regular tables or other
types of views.
(The comments for rewriteTargetListIU suggest that we must do this
for correct expansion of NEW references in rules, but I now think
that that was just lazy comment editing in 2ec993a7c. If we didn't
need it for rules on views before there were triggers, we don't need
it after that.)
This essentially propagates 86dc90056's decision that we don't need
dummy column updates into the view case. Aside from making the
different cases more uniform and hence possibly forestalling future
bugs, it ought to save a little bit of rewriter/planner effort.
Discussion: https://postgr.es/m/2181213.1619397634@sss.pgh.pa.us
We send the prepare for concurrently aborted xacts so that later, when
the rollback prepared is decoded and sent, the downstream is able to
roll back such a xact. In the 'streaming' case (when we send changes for
in-progress transactions), we were sending the prepare twice when a
concurrent abort was detected.
Author: Peter Smith
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/f82133c6-6055-b400-7922-97dae9f2b50b@enterprisedb.com
This function's behavior for UPDATE on a trigger-updatable view was
justified by analogy to what preptlist.c used to do for UPDATE on
regular tables. Since preptlist.c hasn't done that since 86dc90056,
that argument is no longer sensible, let alone convincing. I think
we do still need it to act that way, so update the comment to explain
why.
Commit 6f6f284 introduced a specific macro to make printf()-ing of LSNs
easier. This takes care of what look like the remaining code paths that
did not get the call.
Author: Michael Paquier
Reviewed-by: Kyotaro Horiguchi, Tom Lane
Discussion: https://postgr.es/m/YIJS9x6K8ruizN7j@paquier.xyz
Instead, put them in via a format placeholder. This reduces the
number of distinct translatable messages and also reduces the chances
of typos during translation. We already did this for the system call
arguments in a number of cases, so this is just the same thing taken a
bit further.
Discussion: https://www.postgresql.org/message-id/flat/92d6f545-5102-65d8-3c87-489f71ea0a37%40enterprisedb.com
These functions shouldn't receive null arguments: multirange_constructor0()
doesn't have any arguments while multirange_constructor2() has a single array
argument, which is never null.
But mark them strict anyway for the sake of uniformity.
Also, make checks for null arguments use elog() instead of ereport(), as
these errors should normally never be thrown. And adjust the
corresponding comments.
Catversion is bumped.
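For reference, a hedged sketch of how the two constructors are reached
from SQL (int4multirange is one of the built-in constructor families):
    SELECT int4multirange();                                  -- multirange_constructor0
    SELECT int4multirange(int4range(1, 5), int4range(8, 10)); -- multirange_constructor2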
Reported-by: Peter Eisentraut
Discussion: https://postgr.es/m/0f783a96-8d67-9e71-996b-f34a7352eeef%40enterprisedb.com
Commit bbe0a81db6 introduced the "INCLUDING COMPRESSION" option in the
CREATE TABLE command, but TableLikeOption in gram.y and parsenodes.h did
not classify this new option in alphabetical order with the rest.
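For reference, a hedged sketch of the option in question (table names
are hypothetical):
    CREATE TABLE t2 (LIKE t1 INCLUDING COMPRESSION);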
Author: Fujii Masao
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/YHerAixOhfR1ryXa@paquier.xyz
This is cleanup for commit 27e1f1456:
* ExecAppendAsyncEventWait(), which was modified a bit further by commit
a8af856d3, duplicated the same nevents calculation. Simplify the code
a little bit to avoid the duplication. Update comments there.
* Add an assertion to ExecAppendAsyncRequest().
* Update a comment about merging the async_capable options from input
relations in merge_fdw_options(), per complaint from Kyotaro Horiguchi.
* Add a comment for fetch_more_data_begin().
Author: Etsuro Fujita
Discussion: https://postgr.es/m/CAPmGK1637W30Wx3MnrReewhafn6F_0J76mrJGoFXFnpPq4QfvA%40mail.gmail.com
Adopt a more consistent policy about what slot-type-specific
getsysattr functions should do when system attributes are not
available. To wit, they should all throw the same user-oriented
error, rather than variously crashing or emitting developer-oriented
messages.
This closes an identifiable problem in commits a71cfc56b and
3fb93103a (in v13 and v12), so back-patch into those branches,
along with a test case to try to ensure we don't break it again.
It is not known that any of the former crash cases are reachable
in HEAD, but this seems like a good safety improvement in any case.
Discussion: https://postgr.es/m/141051591267657@mail.yandex.ru
Have interested callers of find_inheritance_children set the
detached_exist value to false prior to calling it, so that that routine
only has to set it true in the rare cases where it is necessary. Don't
touch it otherwise.
Per buildfarm member thorntail (which reported a UBSan failure here).
During queries coming from ri_triggers.c, we need to omit partitions
that are marked pending detach -- otherwise, the RI query is tricked
into allowing a row into the referencing table whose corresponding row
is in the detached partition. Which is bogus: once the detach operation
completes, the row becomes an orphan.
However, the code was not doing that in repeatable-read transactions,
because relcache kept a copy of the partition descriptor that included
the partition, and used it in the RI query. This commit changes the
partdesc cache code to only keep descriptors that aren't dependent on
a snapshot (namely: those where no detached partitions exist, and those
where detached partitions are included). When a partdesc-without-
detached-partitions is requested, we create one afresh each time; also,
those partdescs are stored in PortalContext instead of
CacheMemoryContext.
find_inheritance_children gets a new output *detached_exist boolean,
which indicates whether any partition marked pending-detach is found.
Its "include_detached" input flag is changed to "omit_detached", because
that name captures desired the semantics more naturally.
CreatePartitionDirectory() and RelationGetPartitionDesc() arguments are
identically renamed.
This was noticed because a buildfarm member that runs with relcache
clobbering, which would not keep the improperly cached partdesc, broke
one test, which led us to realize that the expected output of that test
was bogus. This commit also corrects that expected output.
Author: Amit Langote <amitlangote09@gmail.com>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/3269784.1617215412@sss.pgh.pa.us
Creating a trigger on a relation targeted by an apply operation would
cause a relation leak once the change got committed, as the executor
would miss that the relation needed to be closed beforehand. This issue
was introduced by the refactoring done in 1375422c, where it became
necessary to track relations within es_opened_result_relations to make
sure that they are closed.
We have discussed using ExecInitResultRelation() coupled with
ExecCloseResultRelations() for the relations in need of tracking by the
apply operations in the subscribers, which would greatly simplify the
opening and closing of indexes, but this requires a larger rework and
reorganization of the worker code, particularly for the tuple routing
part. And that's not really welcome post feature freeze. So, for now,
settle on the same solution as TRUNCATE, which is to fill in
es_opened_result_relations with the relations opened, to make sure that
ExecGetTriggerResultRel() finds them and that they get closed.
The code is lightly refactored so that a relation is not registered
three times for each DML code path, making the whole a bit easier to
follow.
Reported-by: Tang Haiying, Shi Yu, Hou Zhijie
Author: Amit Langote, Masahiko Sawada, Hou Zhijie
Reviewed-by: Amit Kapila, Michael Paquier
Discussion: https://postgr.es/m/OS0PR01MB611383FA0FE92EB9DE21946AFB769@OS0PR01MB6113.jpnprd01.prod.outlook.com
On ALTER TABLE .. DETACH CONCURRENTLY, we add a new table constraint
that duplicates the partition constraint. But if the partition already
has another constraint that implies that one, then that's unnecessary.
We were already avoiding the addition of a duplicate constraint if there
was an exact 'equal' match -- this just improves the quality of the check.
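A hedged sketch with hypothetical tables, where the added CHECK
constraint already implies the partition constraint, so the DETACH no
longer adds a redundant one:
    CREATE TABLE p (a int) PARTITION BY RANGE (a);
    CREATE TABLE p1 PARTITION OF p FOR VALUES FROM (1) TO (10);
    ALTER TABLE p1 ADD CONSTRAINT p1_a_check CHECK (a >= 1 AND a < 10);
    ALTER TABLE p DETACH PARTITION p1 CONCURRENTLY;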
Author: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20210410184226.GY6592@telsasoft.com
Since pg_depend can contain duplicate entries, we need to eliminate
those in information schema views that build on pg_depend, using
DISTINCT. Some of the older views already did that correctly, but
some of the more recently added ones didn't. (In some of these views,
it might not be possible to reproduce the issue because of how the
implementation happens to deduplicate dependencies while recording
them, but it seems better to keep this consistent in all cases.)
While tracking down the bug fixed in the preceding commit, I got quite
annoyed by the low quality of spg_desc's output. Add missing fields,
try to make the formatting consistent.
The log_statement output is issued before the query_id can be computed,
so properly clear the value, and document the interaction.
Reported-by: Fujii Masao, Michael Paquier
Discussion: https://postgr.es/m/YHPkU8hFi4no4NSw@paquier.xyz
Author: Julien Rouhaud
Rename pg_stat_activity.queryid to query_id. Previously, it was
pg_stat_activity.queryid, to match the pg_stat_statements queryid
column. This is an adjustment to patch 4f0b0966c8. This also adjusts
some of the internal function calls to match. Catversion bumped.
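A hedged sketch of the renamed column:
    SELECT pid, query_id, query FROM pg_stat_activity;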
Reported-by: Álvaro Herrera, Julien Rouhaud
Discussion: https://postgr.es/m/20210408032704.GA7498@alvherre.pgsql
I didn't particularly like this function name, as it fails to
express what's going on. Also, returning the sort expression
alone isn't too helpful --- typically, a caller would also
need some other fields of the EquivalenceMember. But the
sole caller really only needs a bool result, so let's make
it "bool relation_can_be_sorted_early()".
Discussion: https://postgr.es/m/91f3ec99-85a4-fa55-ea74-33f85a5c651f@swarm64.com
An oversight introduced by the incremental-sort patches caused
"could not find pathkey item to sort" errors in some situations
where a sort key involves an aggregate or window function.
The basic problem here is that find_em_expr_usable_for_sorting_rel
isn't properly modeling what prepare_sort_from_pathkeys will do
later. Rather than hoping we can keep those functions in sync,
let's refactor so that they actually share the code for
identifying a suitable sort expression.
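A hedged sketch of the kind of query shape that could hit the error,
with a hypothetical table t(x, y) and a sort key involving an aggregate:
    SELECT sum(x) FROM t GROUP BY y ORDER BY sum(x);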
With this refactoring, tlist.c's tlist_member_ignore_relabel
is unused. I removed it in HEAD but left it in place in v13,
in case any extensions are using it.
Per report from Luc Vlaming. Back-patch to v13 where the
problem arose.
James Coleman and Tom Lane
Discussion: https://postgr.es/m/91f3ec99-85a4-fa55-ea74-33f85a5c651f@swarm64.com
Document VACUUM's soft assumption that any LP_DEAD items encountered
during pruning will become LP_UNUSED items before VACUUM finishes up.
This is integral to the accounting used by VACUUM to generate its final
report on the table to the stats collector. It also affects how VACUUM
determines which heap pages are truncatable. In both cases VACUUM is
concerned with the likely contents of the page in the near future, not
the current contents of the page.
This state of affairs created the false impression that VACUUM's dead
tuple accounting differed significantly from the similar accounting used
during ANALYZE. There were and are no substantive differences, at least
when the soft assumption completely works out. This is far clearer now.
Also document cases where things don't quite work out for VACUUM's dead
tuple accounting. It's possible that a significant number of LP_DEAD
items will be left behind by VACUUM, and won't be recorded as remaining
dead tuples in VACUUM's statistics collector report. This behavior
dates back to commit a96c41fe, which taught VACUUM to run without index
and heap vacuuming at the user's request. The failsafe mechanism added
to VACUUM more recently by commit 1e55e7d1 takes the same approach to
dead tuple accounting.
Reported-By: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/CAH2-Wz=Jmtu18PrsYq3EvvZJGOmZqSO2u3bvKpx9xJa5uhNp=Q@mail.gmail.com
Add a few words of comment to explain why SIGURG doesn't follow the
dummy_handler pattern used for SIGUSR2, since that might otherwise
appear to be a bug.
Discussion: https://postgr.es/m/4006115.1618577212%40sss.pgh.pa.us
As it stands, find_expr_references_walker() pays attention to leaf-node
collation fields while ignoring the input collations of actual function
and operator nodes. That seems exactly backwards from a semantic
standpoint, and it leads to reporting dependencies on collations that
really have nothing to do with the expression's behavior.
Hence, rewrite to look at function input collations instead. This
isn't completely perfect either; it fails to account for the behavior
of record_eq and its siblings. (The previous coding at least gave an
approximation of that, though I think it could be fooled pretty easily
into considering the columns of irrelevant composite types.) We may
be able to improve on this later, but for now this should satisfy the
buildfarm members that didn't like ef387bed8.
In passing fix some oversights in GetTypeCollations(), and get
rid of its duplicative de-duplications. (I'm worried that it's
still potentially O(N^2) or worse, but this makes it a little
better.)
Discussion: https://postgr.es/m/3564817.1618420687@sss.pgh.pa.us
Adopt the new pre-parsed representation for all built-in and
information_schema SQL-language functions, except for a small
number that can't presently be converted because they have
polymorphic arguments.
This eliminates residual hazards around search-path safety of
these functions, and might provide some small performance benefits
by reducing parsing costs. It seems useful also to provide more
test coverage for the SQL-standard-body feature.
Discussion: https://postgr.es/m/3956760.1618529139@sss.pgh.pa.us
Invent system_functions.sql to carry the function definitions that
were formerly in system_views.sql. The function definitions were
already a quarter of the file and are about to be more, so it seems
appropriate to give them their own home.
In passing, fix an oversight in dfb75e478: it neglected to call
check_input() for system_constraints.sql.
Discussion: https://postgr.es/m/3956760.1618529139@sss.pgh.pa.us
recordMultipleDependencies had the wrong scope for its "version"
variable, allowing a version label to leak from the collation entry it
was meant for to subsequent non-collation entries. This is relatively
hard to trigger because of the OID-descending order that the inputs
will normally arrive in: subsequent non-collation items will tend to
be pinned. But it can be exhibited easily with a custom collation.
Also, don't special-case the default collation, but instead ignore
pinned-ness of a collation when we've found a version for it. This
avoids creating useless pg_depend entries, and removes a not-very-
future-proof assumption that C, POSIX, and DEFAULT are the only
pinned collations.
A small problem is that, because the default collation may or may
not have a version, the regression tests can't assume anything about
whether dependency entries will be made for it. This seems OK though
since it's now handled just the same as other collations, and we have
test cases for both versioned and unversioned collations.
Fixes oversights in commit 257836a75. Thanks to Julien Rouhaud
for review.
Discussion: https://postgr.es/m/3564817.1618420687@sss.pgh.pa.us
This is only a latent bug, since these calls are only reached for
non-text output formats, and currently none of those will print
the units. Still, we should get it right in case that ever changes.
Justin Pryzby
Discussion: https://postgr.es/m/20210415163846.GA3315@telsasoft.com
This adds statistics about the total transaction count and the total
transaction data logically sent to the decoding output plugin from
ReorderBuffer. Users can query the pg_stat_replication_slots view to check
these stats.
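A hedged sketch of checking the new counters:
    SELECT slot_name, total_txns, total_bytes FROM pg_stat_replication_slots;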
Suggested-by: Andres Freund
Author: Vignesh C and Amit Kapila
Reviewed-by: Sawada Masahiko, Amit Kapila
Discussion: https://postgr.es/m/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de
Commit e717a9a18 changed the longstanding rule that prosrc is NOT NULL
because when a SQL-language function is written in SQL-standard style,
we don't currently have anything useful to put there. This seems a poor
decision though, as it could easily have negative impacts on external
PLs (opening them to crashes they didn't use to have, for instance).
SQL-function-related code can just as easily test "is prosqlbody not
null" as "is prosrc null", so there's no real gain there either.
Hence, revert the NOT NULL marking removal and adjust related logic.
For now, we just put an empty string into prosrc for SQL-standard
functions. Maybe we'll have a better idea later, although the
history of things like pg_attrdef.adsrc suggests that it's not
easy to maintain a string equivalent of a node tree.
This also adds an assertion that queryDesc->sourceText != NULL
to standard_ExecutorStart. We'd been silently relying on that
for awhile, so let's make it less silent.
Also fix some overlooked documentation and test cases.
Discussion: https://postgr.es/m/2197698.1617984583@sss.pgh.pa.us
This GUC has always been classified as a planner option since its
introduction in 7c944bd, and was listed in postgresql.conf.sample. As
this parameter exists for testing purposes, move it to the section
dedicated to developer parameters and hence remove it from
postgresql.conf.sample. This will avoid any temptation to play with it
on production servers for users who should never really have to touch
this parameter.
The general description used for developer options is reworded a bit, to
take into account the inclusion of force_parallel_mode, per a suggestion
from Tom Lane.
Per discussion between Tom Lane, Bruce Momjian, Justin Pryzby, Bharath
Rupireddy and me.
Author: Justin Pryzby, Tom Lane
Discussion: https://postgr.es/m/20210403152402.GA8049@momjian.us
This will make it consistent with the other usages of slotname in the
code. In passing, change the pgstat_report_replslot signature to use a
structure
rather than multiple parameters.
Reported-by: Andres Freund
Author: Vignesh C
Reviewed-by: Sawada Masahiko, Amit Kapila
Discussion: https://postgr.es/m/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de
It seems like a good idea to bypass heap truncation when the wraparound
failsafe mechanism (which was added in commit 1e55e7d1) is in effect.
Deliberately don't bypass heap truncation in the INDEX_CLEANUP=off case,
even though it is similar to the failsafe case. There is already a
separate reloption (and related VACUUM parameter) for that.
Reported-By: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/CAD21AoDWRh6oTN5T8wa+cpZUVpHXET8BJ8Da7WHVHpwkPP6KLg@mail.gmail.com
Previously you could only use unqualified variable names here.
While that's not a functional deficiency, since only the target
table can be referenced, it's a surprising inconsistency with the
rules for partial-index predicates, on which this syntax is
supposedly modeled.
The fix for that is no harder than passing addToRelNameSpace = true
to addNSItemToQuery. However, it's really pretty bogus for
transformOnConflictArbiter and transformOnConflictClause to be
messing with the namespace item for the target table at all.
It's not theirs to manage, it results in duplicative creations of
namespace items, and transformOnConflictClause wasn't even doing
it quite correctly (that coding resulted in two nsitems for the
target table, since it hadn't cleaned out the existing one).
Hence, make transformInsertStmt responsible for setting up the
target nsitem once for both these clauses and RETURNING.
Also, arrange for ON CONFLICT ... UPDATE's "excluded" pseudo-relation
to be added to the rangetable before we run transformOnConflictArbiter.
This produces a more helpful HINT if someone writes "excluded.col"
in the arbiter expression.
Per bug #16958 from Lukas Eder. Although I agree this is a bug,
the consequences are hardly severe, so no back-patch.
Discussion: https://postgr.es/m/16958-963f638020de271c@postgresql.org
Most GUC check hooks that inspect database state have special checks
that prevent them from throwing hard errors for state-dependent issues
when source == PGC_S_TEST. This allows, for example,
"ALTER DATABASE d SET default_text_search_config = foo" when the "foo"
configuration hasn't been created yet. Without this, we have problems
during dump/reload or pg_upgrade, because pg_dump has no idea about
possible dependencies of GUC values and can't ensure a safe restore
ordering.
However, check_role() and check_session_authorization() hadn't gotten
the memo about that, and would throw hard errors anyway. It's not
entirely clear what the use-case for "ALTER ROLE x SET role = y" is,
but we've now heard two independent complaints about that bollixing
an upgrade, so apparently some people are doing it.
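A hedged sketch of the forms in question (role names are hypothetical);
with the fix, these no longer throw hard errors at SET-test time during
dump/reload or pg_upgrade:
    ALTER ROLE alice SET role = 'bob';
    ALTER ROLE alice SET session_authorization = 'alice';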
Hence, fix these two functions to act more like other check hooks
with similar needs. (But I did not change their insistence on
being inside a transaction, as it's still not apparent that setting
either GUC from the configuration file would be wise.)
Also fix check_temp_buffers, which had a different form of the disease
of making state-dependent checks without any exception for PGC_S_TEST.
A cursory survey of other GUC check hooks did not find any more issues
of this ilk. (There are a lot of interdependencies among
PGC_POSTMASTER and PGC_SIGHUP GUCs, which may be a bad idea, but
they're not relevant to the immediate concern because they can't be
set via ALTER ROLE/DATABASE.)
Per reports from Charlie Hornsby and Nathan Bossart. Back-patch
to all supported branches.
Discussion: https://postgr.es/m/HE1P189MB0523B31598B0C772C908088DB7709@HE1P189MB0523.EURP189.PROD.OUTLOOK.COM
Discussion: https://postgr.es/m/20160711223641.1426.86096@wrigleys.postgresql.org
Previously, get_cached_rowtype() cached a pointer to a reference-counted
tuple descriptor from the typcache, relying on the ExprContextCallback
mechanism to release the tupdesc refcount when the expression tree
using the tupdesc was destroyed. This worked fine when it was designed,
but the introduction of within-DO-block COMMITs broke it. The refcount
is logged in a transaction-lifespan resource owner, but plpgsql won't
destroy simple expressions made within the DO block (before its first
commit) until the DO block is exited. That results in a warning about
a leaked tupdesc refcount when the COMMIT destroys the original resource
owner, and then an error about the active resource owner not holding a
matching refcount when the expression is destroyed.
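A hedged repro sketch ("pair" is a hypothetical composite type):
    CREATE TYPE pair AS (x int, y int);
    DO $$
    DECLARE v pair;
    BEGIN
      v := ROW(1, 2);   -- simple expression caches the tupdesc refcount
      COMMIT;           -- destroys the transaction-lifespan resource owner
      v := ROW(3, 4);   -- could then complain about the refcount mismatch
    END $$;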
To fix, get rid of the need to have a shutdown callback at all, by
instead caching a pointer to the relevant typcache entry. Those
survive for the life of the backend, so we needn't worry about the
pointer becoming stale. (For registered RECORD types, we can still
cache a pointer to the tupdesc, knowing that it won't change for the
life of the backend.) This mechanism has been in use in plpgsql
and expandedrecord.c since commit 4b93f5799, and seems to work well.
This change requires modifying the ExprEvalStep structs used by the
relevant expression step types, which is slightly worrisome for
back-patching. However, there seems no good reason for extensions
to be familiar with the details of these particular sub-structs.
Per report from Rohit Bhogate. Back-patch to v11 where within-DO-block
COMMITs became a thing.
Discussion: https://postgr.es/m/CAAV6ZkQRCVBh8qAY+SZiHnz+U+FqAGBBDaDTjF2yiKa2nJSLKg@mail.gmail.com
heap_update needs to clear any existing "all visible" flag on
the old tuple's page (and on the new page too, if different).
Per coding rules, to do this it must acquire pin on the appropriate
visibility-map page while not holding exclusive buffer lock;
which creates a race condition since someone else could set the
flag whenever we're not holding the buffer lock. The code is
supposed to handle that by re-checking the flag after acquiring
buffer lock and retrying if it became set. However, one code
path through heap_update itself, as well as one in its subroutine
RelationGetBufferForTuple, failed to do this. The end result,
in the unlikely event that a concurrent VACUUM did set the flag
while we're transiently not holding lock, is a non-recurring
"PANIC: wrong buffer passed to visibilitymap_clear" failure.
This has been seen a few times in the buildfarm since recent VACUUM
changes that added code paths that could set the all-visible flag
while holding only exclusive buffer lock. Previously, the flag
was (usually?) set only after doing LockBufferForCleanup, which
would insist on buffer pin count zero, thus preventing the flag
from becoming set partway through heap_update. However, it's
clear that it's heap_update not VACUUM that's at fault here.
What's less clear is whether there is any hazard from these bugs
in released branches. heap_update is certainly violating API
expectations, but if there is no code path that can set all-visible
without a cleanup lock then it's only a latent bug. That's not
100% certain though, besides which we should worry about extensions
or future back-patch fixes that could introduce such code paths.
I chose to back-patch to v12. Fixing RelationGetBufferForTuple
before that would require also back-patching portions of older
fixes (notably 0d1fe9f74), which is more code churn than seems
prudent to fix a hypothetical issue.
Discussion: https://postgr.es/m/2247102.1618008027@sss.pgh.pa.us
With the Oracle Developer Studio 12.6 compiler, #line directives alter
the current source file location for purposes of #include "..."
directives. Hence, a VPATH build failed with 'cannot find include file:
"specscanner.c"'. With two exceptions, parser-containing directories
already add "-I. -I$(srcdir)"; eliminate the exceptions. Back-patch to
9.6 (all supported versions).
Commit 6f38d4dac3 failed to heed a warning about the stability of the
value pointed to by "otid". The caller is allowed to pass in a pointer to
newtup->t_self, which will be updated during the execution of the
function. Instead, the SSI check should use the value we copy into
oldtup.t_self near the top of the function.
Not a live bug, because newtup->t_self doesn't really get updated until
a bit later, but it was confusing and broke the rule established by the
comment.
Back-patch to 13.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/2689164.1618160085%40sss.pgh.pa.us
There are hacks in parse_coerce.c to push down a requested coercion
to below any CollateExpr that may appear. However, we did that even
if the requested data type is non-collatable, leading to an invalid
expression tree in which CollateExpr is applied to a non-collatable
type. The fix is just to drop the CollateExpr altogether, reasoning
that it's useless.
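A hedged sketch of the shape involved, where a coercion to a
non-collatable type is requested atop a COLLATE clause:
    CREATE VIEW v AS SELECT ('42' COLLATE "C")::int AS a;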
This bug is ten years old, dating to the original addition of
COLLATE support. The lack of field complaints suggests that there
aren't a lot of user-visible consequences. We noticed the problem
because it would trigger an assertion in DefineVirtualRelation if
the invalid structure appears as an output column of a view; however,
in a non-assert build, you don't see a crash, just a (subtly incorrect)
complaint about applying collation to a non-collatable type. I found
that by putting the incorrect structure further down in a view, I could
make a view definition that would fail dump/reload, per the added
regression test case. But CollateExpr doesn't do anything at run-time,
so this likely doesn't lead to any really exciting consequences.
Per report from Yulin Pei. Back-patch to all supported branches.
Discussion: https://postgr.es/m/HK0PR01MB22744393C474D503E16C8509F4709@HK0PR01MB2274.apcprd01.prod.exchangelabs.com
This GUC has already been classified as LOGGING_WHAT, but its location
in postgresql.conf.sample and the documentation did not reflect that, so
fix those inconsistencies.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20210404012546.GK6592@telsasoft.com
Using Roman numerals (via "RM" or "rm") for a conversion to calculate a
number of months never considered the case of negative numbers, where a
conversion could easily cause out-of-bounds memory accesses. The
conversions were not completely consistent in themselves either, as
specifying 12 would result in NULL, but it should mean XII.
This commit reworks the conversion calculation to have a more
consistent behavior (illustrated below):
- If the number of months and years is 0, return NULL.
- If the number of months is positive, return the exact month number.
- If the number of months is negative, do a backward calculation, with
-1 meaning December, -2 November, etc.
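A hedged illustration of the rules above:
    SELECT to_char(interval '3 months', 'RM');   -- III
    SELECT to_char(interval '-1 months', 'RM');  -- XII (December)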
Reported-by: Theodor Arsenij Larionov-Trichkin
Author: Julien Rouhaud
Discussion: https://postgr.es/m/16953-f255a18f8c51f1d5@postgresql.org
backpatch-through: 9.6
Coverity complained about possible overflow in expressions like
    intresult = tm->tm_sec * 1000000 + fsec;
on the grounds that the multiplication would happen in 32-bit
arithmetic before widening to the int64 result. I think these
are all false positives because of the limited possible range of
tm_sec; but nonetheless it seems silly to spell it like that when
nearby lines have the identical computation written with a 64-bit
constant.
... or more accurately, with an LL constant, which is not project
style. Make all of these use INT64CONST(), as we do elsewhere.
This is all new code from a2da77cdb, so no need for back-patch.
The path for *exprs != NIL would misbehave, and likely crash,
since pull_varattnos expects its last argument to be valid
at call.
Found by Coverity --- we have no coverage of this path in
the regression tests.
ExecuteTruncate() filters out duplicate tables specified in the TRUNCATE
command, for example in the case where "TRUNCATE foo, foo" is executed.
Such duplicate tables obviously don't need to be opened and closed
because they are skipped. But previously it always opened a table before
checking whether it was a duplicate, and then closed it if it was. That
is, duplicate tables were opened and closed unnecessarily.
This commit changes ExecuteTruncate() so that it opens a table only
after confirming that it is not a duplicate, which avoids the
unnecessary table open/close.
Do not back-patch because such unnecessary table open/close is not a
bug, though it exists in older versions.
Author: Bharath Rupireddy
Reviewed-by: Amul Sul, Fujii Masao
Discussion: https://postgr.es/m/CALj2ACUdBO_sXJTa08OZ0YT0qk7F_gAmRa9hT4dxRcgPS4nsZA@mail.gmail.com
Commit 438fc4a39c prevented WAL replay from writing a COMMIT_TS_SETTS
record. With that change, there is no code left in PostgreSQL core that
generates a COMMIT_TS_SETTS record. We can also assume that no
extensions use the record, because we've received no complaints so far
about the issue that commit 438fc4a39c fixed. Therefore this commit
removes the COMMIT_TS_SETTS record and its related code. Even without
this record, the timestamp required for the commit timestamp feature
can be acquired from the COMMIT record.
Bump WAL page magic.
Reported-by: lx zou <zoulx1982@163.com>
Author: Fujii Masao
Reviewed-by: Alvaro Herrera
Discussion: https://postgr.es/m/16931-620d0f2fdc6108f1@postgresql.org
Commit c9c41c7a33 used two different
naming patterns. Standardize on the majority pattern, which was the
only pattern in the last reviewed version of that commit.
There were some comments in nodeFuncs.c that, depending on your
interpretation of the word "result", could lead you to believe that the
comments were badly copied and pasted from somewhere else. If you thought
of "result" as the return value of the function that the comment is
written in, then you'd be misled. However, if you'd correctly
interpreted "result" to mean the result type of the given node type,
you'd not have seen any issues.
Here we do a small cleanup to try to prevent any future
misinterpretations. Per wording suggestion from Tom Lane.
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAApHDvp+Bw=2Qiu5=uXMKfC7gd0+B=4JvexVgGJU=am2g9a1CA@mail.gmail.com
When commit 0827e8af70 added auto-analyze support for partitioned
tables, it included code to obtain reltuples for the partitioned table
via a number of catalog accesses, reading pg_class.reltuples for each
partition. That's not only very inefficient, but also problematic
because autovacuum doesn't hold any locks on any of those tables -- and
doesn't want to. Replace that code with a read of pg_class.reltuples
for the partitioned table, and make sure ANALYZE and TRUNCATE properly
maintain that value.
I found no code that would be affected by the change of relpages from
zero to non-zero for partitioned tables, and no other code that should
be maintaining it, but if there is, hopefully it'll be an easy fix.
Per buildfarm.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Discussion: https://postgr.es/m/1823909.1617862590@sss.pgh.pa.us
Comment fixes are applied on HEAD, and documentation improvements are
applied on back-branches where needed.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20210408164008.GJ6592@telsasoft.com
Backpatch-through: 9.6
Several buildfarm critters that don't usually produce such
warnings are complaining about e717a9a18. I think it's
actually safe, but move initialization to silence the warning.
Ignore parallel workers in pg_stat_statements
Oversight in 4f0b0966c8, which exposed the queryid in parallel workers.
Counters are aggregated by the main backend process, so parallel workers
would report duplicated activity and could also report activity for the
wrong entry, as they are only aware of the top-level queryid.
Fix thinko in pg_stat_get_activity when retrieving the queryid.
Remove unnecessary call to pgstat_report_queryid().
Reported-by: Amit Kapila, Andres Freund, Thomas Munro
Discussion: https://postgr.es/m/20210408051735.lfbdzun5zdlax5gd@alap3.anarazel.de
Discussion: https://postgr.es/m/p634GTSOqnDW86Owrn6qDAVosC5dJjXjp7BMfc5Gz1Q@mail.gmail.com
Author: Julien Rouhaud
This commit introduces a new foreign data wrapper API for TRUNCATE.
It extends the TRUNCATE command so that it accepts foreign tables as
targets to truncate and invokes that API. It also extends postgres_fdw
so that it can issue a TRUNCATE command to foreign servers, by adding a
new routine for that TRUNCATE API.
The information about options specified in the TRUNCATE command, e.g.,
ONLY, CASCADE, etc., is passed to the FDW via the API. The list of
foreign tables to truncate is also passed to the FDW. The FDW truncates
the foreign data sources that the passed foreign tables specify, based
on that information. For example, postgres_fdw constructs a TRUNCATE
command using them and issues it to the foreign server.
For performance, the TRUNCATE command invokes the FDW routine for
TRUNCATE once per foreign server that the foreign tables to truncate
belong to.
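A hedged sketch, where ft is a hypothetical postgres_fdw foreign table:
    TRUNCATE ft;            -- forwarded to the remote server by postgres_fdw
    TRUNCATE ft CASCADE;    -- options travel to the FDW through the new API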
Author: Kazutaka Onishi, Kohei KaiGai, slightly modified by Fujii Masao
Reviewed-by: Bharath Rupireddy, Michael Paquier, Zhihong Yu, Alvaro Herrera, Stephen Frost, Ashutosh Bapat, Amit Langote, Daniel Gustafsson, Ibrar Ahmed, Fujii Masao
Discussion: https://postgr.es/m/CAOP8fzb_gkReLput7OvOK+8NHgw-RKqNv59vem7=524krQTcWA@mail.gmail.com
Discussion: https://postgr.es/m/CAJuF6cMWDDqU-vn_knZgma+2GMaout68YUgn1uyDnexRhqqM5Q@mail.gmail.com
ScalarArrayOpExprs with "useOr=true" and a set of Consts on the righthand
side have traditionally been evaluated by using a linear search over the
array. When these arrays contain large numbers of elements then this
linear search could become a significant part of execution time.
Here we add a new method of evaluating ScalarArrayOpExpr expressions to
allow them to be evaluated by first building a hash table containing each
element, then on subsequent evaluations, we just probe that hash table to
determine if there is a match.
The planner is in charge of determining when this optimization is possible
and it enables it by setting hashfuncid in the ScalarArrayOpExpr. The
executor will only perform the hash table evaluation when the hashfuncid
is set.
This means that not all cases are optimized. For example CHECK constraints
containing an IN clause won't go through the planner, so won't get the
hashfuncid set. We could maybe do something about that at some later
date. The reason we're not doing it now is from fear that we may slow
down cases where the expression is evaluated only once. Those cases can
be common, for example, a single row INSERT to a table with a CHECK
constraint containing an IN clause.
In the planner, we enable this when there are suitable hash functions for
the ScalarArrayOpExpr's operator and only when there are at least
MIN_ARRAY_SIZE_FOR_HASHED_SAOP elements in the array. The threshold is
currently set to 9.
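A hedged sketch with a hypothetical table t: nine Const elements meet
the current threshold, so given a hashable equality operator the
executor probes a hash table rather than scanning the array linearly:
    SELECT * FROM t WHERE x IN (1, 2, 3, 4, 5, 6, 7, 8, 9);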
Author: James Coleman, David Rowley
Reviewed-by: David Rowley, Tomas Vondra, Heikki Linnakangas
Discussion: https://postgr.es/m/CAAaqYe8x62+=wn0zvNKCj55tPpg-JBHzhZFFc6ANovdqFw7-dA@mail.gmail.com
Introduce a new GUC recovery_prefetch, disabled by default. When
enabled, look ahead in the WAL and try to initiate asynchronous reading
of referenced data blocks that are not yet cached in our buffer pool.
For now, this is done with posix_fadvise(), which has several caveats.
Better mechanisms will follow in later work on the I/O subsystem.
The GUC maintenance_io_concurrency is used to limit the number of
concurrent I/Os we allow ourselves to initiate, based on pessimistic
heuristics used to infer that I/Os have begun and completed.
The GUC wal_decode_buffer_size is used to limit the maximum distance we
are prepared to read ahead in the WAL to find uncached blocks.
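A hedged sketch of enabling the feature (values are illustrative; some
of these settings may require a server restart rather than a reload):
    ALTER SYSTEM SET recovery_prefetch = on;
    ALTER SYSTEM SET maintenance_io_concurrency = 20;
    SELECT pg_reload_conf();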
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com> (parts)
Reviewed-by: Andres Freund <andres@anarazel.de> (parts)
Reviewed-by: Tomas Vondra <tomas.vondra@2ndquadrant.com> (parts)
Tested-by: Tomas Vondra <tomas.vondra@2ndquadrant.com>
Tested-by: Jakub Wartak <Jakub.Wartak@tomtom.com>
Tested-by: Dmitry Dolgov <9erthalion6@gmail.com>
Tested-by: Sait Talha Nisanci <Sait.Nisanci@microsoft.com>
Discussion: https://postgr.es/m/CA%2BhUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq%3DAovOddfHpA%40mail.gmail.com
Teach xlogreader.c to decode its output into a circular buffer, to
support optimizations based on looking ahead.
* XLogReadRecord() works as before, consuming records one by one, and
allowing them to be examined via the traditional XLogRecGetXXX()
macros.
* An alternative new interface XLogNextRecord() is added that returns
pointers to DecodedXLogRecord structs that can be examined directly.
* XLogReadAhead() provides a second cursor that lets you see
further ahead, as long as data is available and there is enough space
in the decoding buffer. This returns DecodedXLogRecord pointers to the
caller, but also adds them to a queue of records that will later be
consumed by XLogNextRecord()/XLogReadRecord().
The buffer's size is controlled with wal_decode_buffer_size. The buffer
could potentially be placed into shared memory, for future projects.
Large records that don't fit in the circular buffer are called
"oversized" and allocated separately with palloc().
Discussion: https://postgr.es/m/CA+hUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq=AovOddfHpA@mail.gmail.com
Previously, the XLogReader module would fetch new input data using a
callback function. Redesign the interface so that it tells the caller
to insert more data with a special return value instead. This API suits
later patches for prefetching, encryption and maybe other future
projects that would otherwise require continually extending the callback
interface.
As incidental cleanup work, move global variables readOff, readLen and
readSegNo inside XlogReaderState.
Author: Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>
Author: Heikki Linnakangas <hlinnaka@iki.fi> (parts of earlier version)
Reviewed-by: Antonin Houska <ah@cybertec.at>
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Takashi Menjo <takashi.menjo@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/20190418.210257.43726183.horiguchi.kyotaro%40lab.ntt.co.jp
There was some code in gen_prune_steps_from_opexps that needlessly
checked a list was not empty when it clearly had to contain at least one
item. This prompted a further cleanup operation in partprune.c.
Additionally, the previous code could end up adding additional needless
INTERSECT steps. However, those do not appear to be able to cause any
misbehavior.
gen_prune_steps_from_opexps is now no longer in charge of generating
combine pruning steps. Instead, gen_partprune_steps_internal, which
already does some combine step creation has been given the sole
responsibility of generating all combine steps. This means that when
we recursively call gen_partprune_steps_internal, since it now always adds
a combine step when it produces multiple steps, we can just pay attention
to the final step returned.
In passing, do quite a bit of work on the comments to try to more clearly
explain the role of both gen_partprune_steps_internal and
gen_prune_steps_from_opexps. This is fairly complex code so some extra
effort to give any new readers an overview of how things work seems like
a good idea.
Author: Amit Langote
Reported-by: Andy Fan
Reviewed-by: Kyotaro Horiguchi, Andy Fan, Ryan Lambert, David Rowley
Discussion: https://postgr.es/m/CAKU4AWqWoVii+bRTeBQmeVW+PznkdO8DfbwqNsu9Gj4ubt9A6w@mail.gmail.com
This adds a function, pg_wait_for_backend_termination(), and a new
timeout argument to pg_terminate_backend(), which will wait for the
backend to actually terminate (with or without signaling it to do so,
depending on which function is called). The default behaviour of
pg_terminate_backend() remains timeout=0, which does not wait. For
pg_wait_for_backend_termination() the default wait is 5 seconds.
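A hedged usage sketch (the PID is illustrative; timeouts are in
milliseconds):
    SELECT pg_terminate_backend(12345, 10000);      -- signal, then wait up to 10s
    SELECT pg_wait_for_backend_termination(12345);  -- wait up to the 5s default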
Author: Bharath Rupireddy
Reviewed-By: Fujii Masao, David Johnston, Muhammad Usama,
Hou Zhijie, Magnus Hagander
Discussion: https://postgr.es/m/CALj2ACUBpunmyhYZw-kXCYs5NM+h6oG_7Df_Tn4mLmmUQifkqA@mail.gmail.com
If you know the ID of a buffer that recently held a block that you would
like to pin, this function can be used to check if it's still there. It
can be used to avoid a second lookup in the buffer mapping table after
PrefetchBuffer() reports a cache hit.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA+hUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq=AovOddfHpA@mail.gmail.com
Previously, autovacuum would completely ignore partitioned tables, which
is not good regarding analyze -- failing to analyze those tables means
poor plans may be chosen. Make autovacuum aware of those tables by
propagating "changes since analyze" counts from the leaf partitions up
the partitioning hierarchy.
This also introduces necessary reloptions support for partitioned tables
(autovacuum_enabled, autovacuum_analyze_scale_factor,
autovacuum_analyze_threshold). It's unclear how best to document this
aspect.
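A hedged sketch of the new reloptions on a hypothetical partitioned
table:
    ALTER TABLE parted SET (autovacuum_enabled = on,
                            autovacuum_analyze_scale_factor = 0.02);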
Author: Yuzuko Hosoya <yuzukohosoya@gmail.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/CAKkQ508_PwVgwJyBY=0Lmkz90j8CmWNPUxgHvCUwGhMrouz6UA@mail.gmail.com
It's far from clear that this is the right approach - but a good
portion of the buildfarm has been red for a few hours, on the last day
of the CF. And this fixes at least the obvious crash. So let's go with
that for now.
Discussion: https://postgr.es/m/20210407225806.majgznh4lk34hjvu%40alap3.anarazel.de
When working on a scan with only a catalog snapshot, we may not have an
ActiveSnapshot set. If we were to come across a detached partition,
that would cause a crash. Fix by only ignoring detached partitions when
there's an active snapshot.
VACUUM has never needed to call ambulkdelete() for each index in cases
where there are precisely zero TIDs in its dead_tuples array by the end
of its first pass over the heap (also its only pass over the heap in
this scenario). Index vacuuming is simply not required when this
happens. Index cleanup will still go ahead, but in practice most calls
to amvacuumcleanup() are usually no-ops when there were zero preceding
ambulkdelete() calls. In short, VACUUM has generally managed to avoid
index scans when there were clearly no index tuples to delete from
indexes. But cases with _close to_ no index tuples to delete were
another matter -- a round of ambulkdelete() calls took place (one per
index), each of which performed a full index scan.
VACUUM now behaves just as if there were zero index tuples to delete in
cases where there are in fact "virtually zero" such tuples. That is, it
can now bypass index vacuuming and heap vacuuming as an optimization
(though not index cleanup). Whether or not VACUUM bypasses indexes is
determined dynamically, based on the just-observed number of heap pages
in the table that have one or more LP_DEAD items (LP_DEAD items in heap
pages have a 1:1 correspondence with index tuples that still need to be
deleted from each index in the worst case).
We only skip index vacuuming when 2% or less of the table's pages have
one or more LP_DEAD items -- bypassing index vacuuming as an
optimization must not noticeably impede setting bits in the visibility
map. As a further condition, the dead_tuples array (i.e. VACUUM's array
of LP_DEAD item TIDs) must not exceed 32MB at the point that the first
pass over the heap finishes, which is also when the decision to bypass
is made. (The VACUUM must also have been able to fit all TIDs in its
maintenance_work_mem-bound dead_tuples space, though with a default
maintenance_work_mem setting it can't matter.)
This avoids surprising jumps in the duration and overhead of routine
vacuuming with workloads where successive VACUUM operations consistently
have almost zero dead index tuples. The number of LP_DEAD items may
well accumulate over multiple VACUUM operations, before finally the
threshold is crossed and VACUUM performs conventional index vacuuming.
Even then, the optimization will have avoided a great deal of largely
unnecessary index vacuuming.
In the future we may teach VACUUM to skip index vacuuming on a per-index
basis, using a much more sophisticated approach. For now we only
consider the extreme cases, where we can be quite confident that index
vacuuming just isn't worth it using simple heuristics.
Also log information about how many heap pages have one or more LP_DEAD
items when autovacuum logging is enabled.
Author: Masahiko Sawada <sawada.mshk@gmail.com>
Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAD21AoD0SkE11fMw4jD4RENAwBMcw1wasVnwpJVw3tVqPOQgAw@mail.gmail.com
Discussion: https://postgr.es/m/CAH2-WzmkebqPd4MVGuPTOS9bMFvp9MDs5cRTCOsv1rQJ3jCbXw@mail.gmail.com
This adds support for writing CREATE FUNCTION and CREATE PROCEDURE
statements for language SQL with a function body that conforms to the
SQL standard and is portable to other implementations.
Instead of the PostgreSQL-specific AS $$ string literal $$ syntax,
this allows writing out the SQL statements making up the body
unquoted, either as a single statement:
    CREATE FUNCTION add(a integer, b integer) RETURNS integer
        LANGUAGE SQL
        RETURN a + b;
or as a block
    CREATE PROCEDURE insert_data(a integer, b integer)
        LANGUAGE SQL
        BEGIN ATOMIC
          INSERT INTO tbl VALUES (a);
          INSERT INTO tbl VALUES (b);
        END;
The function body is parsed at function definition time and stored as
expression nodes in a new pg_proc column prosqlbody. So at run time,
no further parsing is required.
However, this form does not support polymorphic arguments, because
there is no more parse analysis done at call time.
Dependencies between the function and the objects it uses are fully
tracked.
A new RETURN statement is introduced. This can only be used inside
function bodies. Internally, it is treated much like a SELECT
statement.
psql needs some new intelligence to keep track of function body
boundaries so that it doesn't send off statements when it sees
semicolons that are inside a function body.
Tested-by: Jaime Casanova <jcasanov@systemguards.com.ec>
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/1c11f1eb-f00c-43b7-799d-2d44132c02d7@2ndquadrant.com
Add a failsafe mechanism that is triggered by VACUUM when it notices
that the table's relfrozenxid and/or relminmxid are dangerously far in
the past. VACUUM checks the age of the table dynamically, at regular
intervals.
When the failsafe triggers, VACUUM takes extraordinary measures to
finish as quickly as possible so that relfrozenxid and/or relminmxid can
be advanced. VACUUM will stop applying any cost-based delay that may be
in effect. VACUUM will also bypass any further index vacuuming and heap
vacuuming -- it only completes whatever remaining pruning and freezing
is required. Bypassing index/heap vacuuming is enabled by commit
8523492d, which made it possible to dynamically trigger the mechanism
already used within VACUUM when it is run with INDEX_CLEANUP off.
It is expected that the failsafe will almost always trigger within an
autovacuum to prevent wraparound, long after the autovacuum began.
However, the failsafe mechanism can trigger in any VACUUM operation.
Even in a non-aggressive VACUUM, where we're likely to not advance
relfrozenxid, it still seems like a good idea to finish off remaining
pruning and freezing. An aggressive/anti-wraparound VACUUM will be
launched immediately afterwards. Note that the anti-wraparound VACUUM
that follows will itself trigger the failsafe, usually before it even
begins its first (and only) pass over the heap.
The failsafe is controlled by two new GUCs: vacuum_failsafe_age, and
vacuum_multixact_failsafe_age. There are no equivalent reloptions,
since that isn't expected to be useful. The GUCs have rather high
defaults (both default to 1.6 billion), and are expected to generally
only be used to make the failsafe trigger sooner/more frequently.
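A hedged sketch of lowering the thresholds from the defaults:
    ALTER SYSTEM SET vacuum_failsafe_age = 1000000000;
    ALTER SYSTEM SET vacuum_multixact_failsafe_age = 1000000000;
    SELECT pg_reload_conf();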
Author: Masahiko Sawada <sawada.mshk@gmail.com>
Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAD21AoD0SkE11fMw4jD4RENAwBMcw1wasVnwpJVw3tVqPOQgAw@mail.gmail.com
Discussion: https://postgr.es/m/CAH2-WzmgH3ySGYeC-m-eOBsa2=sDwa292-CFghV4rESYo39FsQ@mail.gmail.com
Use the in-core query id computation for pg_stat_activity,
log_line_prefix, and EXPLAIN VERBOSE.
Similar to other fields in pg_stat_activity, only the queryid of
top-level statements is exposed, and if the backend's status isn't
active then the queryid of the last executed statement is displayed.
Add a %Q placeholder to include the queryid in log_line_prefix, which
will also only expose top level statements.
For EXPLAIN VERBOSE, if a query identifier has been computed, either by
enabling compute_query_id or using a third-party module, display it.
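A hedged sketch of the new surfaces, assuming in-core computation is
enabled (setting it may require sufficient privileges):
    SET compute_query_id = on;
    SELECT pid, query_id FROM pg_stat_activity WHERE pid = pg_backend_pid();
    EXPLAIN VERBOSE SELECT 1;   -- includes a "Query Identifier" line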
Bump catalog version.
Discussion: https://postgr.es/m/20210407125726.tkvjdbw76hxnpwfi@nol
Author: Julien Rouhaud
Reviewed-by: Alvaro Herrera, Nitin Jadhav, Zhihong Yu
Add compute_query_id GUC to control whether a query identifier should be
computed by the core (off by default). It's therefore now possible to
disable core queryid computation and use pg_stat_statements with a
different algorithm to compute the query identifier by using a
third-party module.
To ensure that a single source of query identifier can be used and is
well defined, modules that calculate a query identifier should throw an
error if compute_query_id is set to compute a query id and a query
identifier was already calculated.
Discussion: https://postgr.es/m/20210407125726.tkvjdbw76hxnpwfi@nol
Author: Julien Rouhaud
Reviewed-by: Alvaro Herrera, Nitin Jadhav, Zhihong Yu
Teach VACUUM to truncate the line pointer array of each heap page when a
contiguous group of LP_UNUSED line pointers appears at the end of the
array -- these unused and unreferenced items are excluded. This process
occurs during VACUUM's second pass over the heap, right after LP_DEAD
line pointers on the page (those encountered/pruned during the first
pass) are marked LP_UNUSED.
Truncation avoids line pointer bloat with certain workloads,
particularly those involving continual range DELETEs and bulk INSERTs
against the same table.
Also harden heapam code to check for an out-of-range page offset number
in places where we weren't already doing so.
Author: Matthias van de Meent <boekewurm+postgres@gmail.com>
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-By: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAEze2WjgaQc55Y5f5CQd3L=eS5CZcff2Obxp=O6pto8-f0hC4w@mail.gmail.com
Discussion: https://postgr.es/m/CAH2-Wzn6a64PJM1Ggzm=uvx2otsopJMhFQj_g1rAj4GWr3ZSzw@mail.gmail.com
Formerly we were pretty lax about what a custom GUC's name could
be; so long as it had at least one dot in it, we'd take it.
However, corner cases such as dashes or equal signs in the name
would cause various bits of functionality to misbehave. Rather
than trying to make the world perfectly safe for that, let's
just require that custom names look like "identifier.identifier",
where "identifier" means something that scan.l would accept
without double quotes.
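A hedged sketch of the rule (extension and parameter names are
hypothetical):
    SET myext.log_level = 'debug';   -- identifier.identifier: accepted
    SET "my-ext".level = 'x';        -- now rejected: needs double quotes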
Along the way, this patch refactors things slightly in guc.c
so that find_option() is responsible for reporting GUC-not-found
cases, allowing removal of duplicative code from its callers.
Per report from Hubert Depesz Lubaczewski. No back-patch,
since the consequences of the problem don't seem to warrant
changing behavior in stable branches.
Discussion: https://postgr.es/m/951335.1612910077@sss.pgh.pa.us
The code in bringetbitmap() simply added the whole matching page range
to the TID bitmap, as determined by pages_per_range, even if some of the
pages were beyond the end of the heap. The query then might fail with
an error like this:
    ERROR: could not open file "base/20176/20228.2" (target block
    262144): previous segment is only 131021 blocks
In this case, the relation has 262093 pages (131072 and 131021 pages),
but we're trying to access block 262144, i.e. the first block of the 3rd
segment. At that point _mdfd_getseg() notices the preceding segment is
incomplete, and fails.
Hitting this in practice is rather unlikely, because:
* Most indexes use power-of-two ranges, so segments and page ranges
align perfectly (segment end is also a page range end).
* The table size has to be just right, with the last segment being
almost full - less than one page range from full segment, so that the
last page range actually crosses the segment boundary.
* Prefetch has to be enabled. The regular page access checks that
pages are not beyond heap end, but prefetch does not. On older
releases (before 12) the execution stops after hitting the first
non-existent page, so the prefetch distance has to be sufficient
to reach the first page in the next segment to trigger the issue.
Since 12 it's enough to just have prefetch enabled, the prefetch
distance does not matter.
Fixed by not adding non-existent pages to the TID bitmap. Backpatch
all the way back to 9.6 (BRIN indexes were introduced in 9.5, but that
release is EOL).
Backpatch-through: 9.6
The previous implementation (from 9afffcb833) had an unnecessary check
on the boundaries of the enum which triggered compile warnings. To clean
it up, move the pre-existing static assert to a central location and
call that.
Reported-By: Erik Rijkers
Reviewed-By: Michael Paquier
Discussion: https://postgr.es/m/1056399262.13159.1617793249020@webmailclassic.xs4all.nl
This reverts commit 9f984ba6d2.
It was making the buildfarm unhappy; apparently, setting
client_min_messages in a regression test produces different output if
log_statement='all'.
Another issue is that I now suspect the bit sortsupport function was in
fact not correct to call byteacmp(). Revert to investigate both of those
issues.