Commit Graph

4416 Commits

Author SHA1 Message Date
Thomas Munro cf112c1220 Remove dead pread and pwrite replacement code.
pread() and pwrite() are in SUSv2, and all targeted Unix systems have
them.

Previously, we defined pg_pread and pg_pwrite to emulate these functions
with lseek() on old Unixen.  The names with a pg_ prefix were a reminder
of a portability hazard: they might change the current file position.
That hazard is gone, so we can drop the prefixes.
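
As an aside for illustration (not code from this commit), pread() and
pwrite() take an explicit offset and leave the file position untouched;
the position-moving hazard existed only in the old lseek()-based
emulation:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        char    out[] = "test";
        char    in[sizeof(out)];
        int     fd = open("datafile", O_RDWR | O_CREAT, 0600);

        if (fd < 0)
        {
            perror("open");
            return 1;
        }
        /* Both calls address offset 100 directly; lseek() is never needed. */
        if (pwrite(fd, out, sizeof(out), 100) != sizeof(out))
            perror("pwrite");
        if (pread(fd, in, sizeof(in), 100) != sizeof(in))
            perror("pread");
        close(fd);
        return 0;
    }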

Since the remaining replacement code is Windows-only, move it into
src/port/win32p{read,write}.c, and move the declarations into
src/include/port/win32_port.h.

No need for vestigial HAVE_PREAD, HAVE_PWRITE macros as they were only
used for declarations in port.h which have now moved into win32_port.h.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Greg Stark <stark@mit.edu>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA+hUKGJ3LHeP9w5Fgzdr4G8AnEtJ=z=p6hGDEm4qYGEUX5B6fQ@mail.gmail.com
2022-08-05 09:49:21 +12:00
Thomas Munro 2b1f580ee2 Remove configure probes for symlink/readlink, and dead code.
symlink() and readlink() are in SUSv2 and all targeted Unix systems have
them.  We have partial emulation on Windows.  Code that raised runtime
errors on systems without them has been dead for years, so we can remove
that and also references to such systems in the documentation.

Define HAVE_READLINK and HAVE_SYMLINK macros on Unix.  Our Windows
replacement functions based on junction points can't be used for
relative paths or for non-directories, so the macros can be used to
check for full symlink support.  The places that deal with tablespaces
can just use symlink functions without checking the macros.  (If they
did check the macros, they'd need to provide an #else branch with a
runtime or compile time error, and it'd be dead code.)
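
For reference only (not part of the patch), the two POSIX calls in
question, as they behave on Unix:

    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        char        target[1024];
        ssize_t     len;

        /* Point /tmp/pg_link at /tmp/pg_dir, then read the link back. */
        if (symlink("/tmp/pg_dir", "/tmp/pg_link") != 0)
            perror("symlink");
        len = readlink("/tmp/pg_link", target, sizeof(target) - 1);
        if (len < 0)
            perror("readlink");
        else
        {
            target[len] = '\0';     /* readlink() does not NUL-terminate */
            printf("link points to %s\n", target);
        }
        return 0;
    }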

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA+hUKGJ3LHeP9w5Fgzdr4G8AnEtJ=z=p6hGDEm4qYGEUX5B6fQ@mail.gmail.com
2022-08-05 09:22:56 +12:00
Daniel Gustafsson f8f20203c2 Rephrase comments to make them clearer
The use of "we" when referring to the active backend might be
misunderstood, so rephrase to make it clearer who is performing
the actions discussed in the comment.

Author: Junwang Zhao <zhjwpku@gmail.com>
Reviewed-by: Erikjan Rijkers <er@xs4all.nl>
Reviewed-by: Robert Treat <rob@xzilla.net>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAEG8a3LRSMqkvjiURiJoSi4aGWORpiXUmUfQQK5PaD6WfPzu3w@mail.gmail.com
2022-08-04 16:30:06 +02:00
Tom Lane e09d7a1262 Improve speed of hash index build.
In the initial data sort, if the bucket numbers are the same, then
sort next on the hash value.  Because index pages are kept in
hash value order, this gains a little speed by allowing the
eventual tuple insertions to be done sequentially, avoiding repeated
data movement within PageAddItem.  This seems to be good for an overall
speedup of 5%-9%, depending on the incoming data.
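
The ordering can be pictured as an ordinary two-key comparison (a
hypothetical simplification, not the tuplesort code itself); sorting an
array of such items with qsort() yields bucket-major, hash-minor order:

    #include <stdint.h>

    typedef struct
    {
        uint32_t    bucket;     /* destination bucket number */
        uint32_t    hashvalue;  /* hash value of the index key */
    } HashSortItem;

    /* Compare by bucket first; on a tie, fall back to the hash value,
     * matching the order in which hash index pages store their items. */
    static int
    hash_sort_cmp(const void *a, const void *b)
    {
        const HashSortItem *x = a;
        const HashSortItem *y = b;

        if (x->bucket != y->bucket)
            return (x->bucket < y->bucket) ? -1 : 1;
        if (x->hashvalue != y->hashvalue)
            return (x->hashvalue < y->hashvalue) ? -1 : 1;
        return 0;
    }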

Simon Riggs, reviewed by Amit Kapila

Discussion: https://postgr.es/m/CANbhV-FG-1ZNMBuwhUF7AxxJz3u5137dYL-o6hchK1V_dMw86g@mail.gmail.com
2022-07-28 14:34:32 -04:00
Alvaro Herrera 9e4f914b5e
Fix replay of create database records on standby
Crash recovery on standby may encounter missing directories
when replaying database-creation WAL records.  Prior to this
patch, the standby would fail to recover in such a case;
however, the directories could be legitimately missing.
Consider the following sequence of commands:

    CREATE DATABASE
    DROP DATABASE
    DROP TABLESPACE

If, after replaying the last WAL record and removing the
tablespace directory, the standby crashes and has to replay the
create database record again, crash recovery must be able to continue.

A fix for this problem was already attempted in 49d9cfc68b, but it
was reverted because of design issues.  This new version is based
on Robert Haas' proposal: any missing tablespaces are created
during recovery before reaching consistency.  Tablespaces
are created as real directories, and should be deleted
by later replay.  CheckRecoveryConsistency ensures
they have disappeared.

The problems detected by this new code are reported as PANIC,
except when allow_in_place_tablespaces is set to ON, in which
case they are WARNING.  Apart from making tests possible, this
gives users an escape hatch in case things don't go as planned.

Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Author: Asim R Praveen <apraveen@pivotal.io>
Author: Paul Guo <paulguo@gmail.com>
Reviewed-by: Anastasia Lubennikova <lubennikovaav@gmail.com> (older versions)
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com> (older versions)
Reviewed-by: Michaël Paquier <michael@paquier.xyz>
Diagnosed-by: Paul Guo <paulguo@gmail.com>
Discussion: https://postgr.es/m/CAEET0ZGx9AvioViLf7nbR_8tH9-=27DN5xWJ2P9-ROH16e4JUA@mail.gmail.com
2022-07-28 08:40:06 +02:00
Michael Paquier ffd1b6bb6f Add overflow protection for block-related data in WAL records
XLogRecordBlockHeader, the header holding the information for the data
related to a block, tracks the length of the data appended to the WAL
record with data_length (uint16).  This size limit was not enforced by
the public routine in charge of registering the data later assembled
into the WAL record, XLogRegisterBufData().  Used incorrectly, it could
lead to the generation of records whose block data overflows that
length.  This commit adds some safeguards to prevent that for the block
data, complaining immediately if an attempt is made to register block
data larger than UINT16_MAX, the limit implied by the internal logic.
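
The guard described above amounts to a bounds check before accumulating
block data; a minimal sketch with hypothetical names, not the actual
xloginsert.c code:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Refuse to let the data registered for one block overflow uint16. */
    static void
    register_block_data(uint32_t *accumulated_len, uint32_t add_len)
    {
        if (add_len > UINT16_MAX - *accumulated_len)
        {
            fprintf(stderr, "too much WAL data registered for one block\n");
            abort();
        }
        *accumulated_len += add_len;
    }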

Note that this also adjusts XLogRegisterData() and XLogRegisterBufData()
so that the length of the WAL record data given by the caller is
unsigned, matching what gets stored in XLogRecData->len.

Extracted from a larger patch by the same author.  The original patch
includes more protections when assembling a record in full that will be
looked at separately later.

Author: Matthias van de Meent
Reviewed-by: Andres Freund, Heikki Linnakangas, Michael Paquier, David
Zhang
Discussion: https://postgr.es/m/CAEze2WgGiw+LZt+vHf8tWqB_6VxeLsMeoAuod0N=ij1q17n5pw@mail.gmail.com
2022-07-27 13:35:40 +09:00
Tom Lane f92944137c Force immediate commit after CREATE DATABASE etc in extended protocol.
We have a few commands that "can't run in a transaction block",
meaning that if they complete their processing but then we fail
to COMMIT, we'll be left with inconsistent on-disk state.
However, the existing defenses for this are only watertight for
simple query protocol.  In extended protocol, we didn't commit
until receiving a Sync message.  Since the client is allowed to
issue another command instead of Sync, we're in trouble if that
command fails or is an explicit ROLLBACK.  In any case, sitting
in an inconsistent state while waiting for a client message
that might not come seems pretty risky.

This case wasn't reachable via libpq before we introduced pipeline
mode, but it's always been an intended aspect of extended query
protocol, and likely there are other clients that could already
reach it.

To fix, set a flag in PreventInTransactionBlock that tells
exec_execute_message to force an immediate commit.  This seems
to be the approach that does least damage to existing working
cases while still preventing the undesirable outcomes.

While here, add some documentation to protocol.sgml that explicitly
says how to use pipelining.  That's latent in the existing docs if
you know what to look for, but it's better to spell it out; and it
provides a place to document this new behavior.

Per bug #17434 from Yugo Nagata.  It's been wrong for ages,
so back-patch to all supported branches.

Discussion: https://postgr.es/m/17434-d9f7a064ce2a88a3@postgresql.org
2022-07-26 13:07:03 -04:00
Amit Kapila 857dd35348 Eliminate duplicate code in table.c.
Additionally, improve the error message, similar to what was done in
2ed532ee8c.

Author: Junwang Zhao, Aleksander Alekseev
Reviewed-by: Amit Kapila, Alvaro Herrera, Kyotaro Horiguchi
Discussion: https://postgr.es/m/CAEG8a3KbVtBm_BYf5tGsKHvmMieQVsq_jBPOg75VViQB7ACL8Q%40mail.gmail.com
2022-07-26 08:22:53 +05:30
Fujii Masao 2387f52962 Remove useless arguments in ReadCheckpointRecord().
This commit removes two arguments "report" and "whichChkpt"
in ReadCheckpointRecord().

"report" is obviously useless because it's always true, i.e., there are
two callers of the function and they always specify true as "report".
Commit 1d919de5eb removed the only call with "report" = false.

"whichChkpt" indicated where the specified checkpoint location
came from, pg_control or backup_label. This information was used
to report different error messages depending on where the invalid
checkpoint record came from, when it was found.
But ReadCheckpointRecord() doesn't need to do that because
its callers already do that and users can still identify where
the invalid checkpoint record came from, by reading such log messages.
Also when "whichChkpt" was 0, the word "primary checkpoint" was used
in the log message and could confuse users because the concept of
primary and secondary checkpoints was already removed before.
These are why this commit removes "whichChkpt" argument.

Author: Fujii Masao
Reviewed-by: Bharath Rupireddy, Kyotaro Horiguchi
Discussion: https://postgr.es/m/fa2e12eb-81c3-0717-0272-755f8a81c8f2@oss.nttdata.com
2022-07-25 10:59:38 +09:00
Thomas Munro 5344723755 Remove unnecessary Windows-specific basebackup code.
Commit c6f2f016 added an explicit check for a Windows "junction point".
That turned out to be needed only because get_dirent_type() was busted
on Windows.  It's been fixed by commit 9d3444dc, so remove it.

Add a TAP-test to demonstrate that in-place tablespaces are copied by
pg_basebackup.  This exercises the codepath that would fail before
c6f2f016 on Windows, and shows that it still doesn't fail now that we're
using get_dirent_type() on both Windows and Unix.

Back-patch to 15, where in-place tablespaces arrived and caused this
problem (i.e., directories where previously only symlinks were expected).

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CA%2BhUKGLzLK4PUPx0_AwXEWXOYAejU%3D7XpxnYE55Y%2Be7hB2N3FA%40mail.gmail.com
2022-07-22 17:41:47 +12:00
Thomas Munro a1b56090eb Remove O_FSYNC and associated macros.
O_FSYNC was a pre-POSIX way of spelling O_SYNC, supported since commit
9d645fd84c for non-conforming operating systems of the time.  It's not
needed on any modern system.  We can just use standard O_SYNC directly
if it exists (= all targeted systems except Windows), and get rid of our
OPEN_SYNC_FLAG macro.

Similarly for standard O_DSYNC, we can just use that directly if it
exists (= all targeted systems except DragonFlyBSD), and get rid of our
OPEN_DATASYNC_FLAG macro.

We still avoid choosing open_datasync as a default value for
wal_sync_method if O_DSYNC has the same value as O_SYNC (= only
OpenBSD), so there is no change in default behavior.
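
For illustration only (not the actual fd.c code), using the flags
directly looks like this once the wrapper macros are gone:

    #include <fcntl.h>

    /* Open a WAL-like file with the strongest sync-on-write flag available. */
    static int
    open_synced(const char *path)
    {
    #if defined(O_DSYNC) && (!defined(O_SYNC) || O_DSYNC != O_SYNC)
        return open(path, O_WRONLY | O_DSYNC);  /* flush data, not all metadata */
    #elif defined(O_SYNC)
        return open(path, O_WRONLY | O_SYNC);   /* flush data and metadata */
    #else
        return open(path, O_WRONLY);            /* no sync flag on this platform */
    #endif
    }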

Discussion: https://postgr.es/m/CA%2BhUKGJE7y92NY7FG2ftUbZUaqohBU65_Ys_7xF5mUHo4wirTQ%40mail.gmail.com
2022-07-22 12:41:17 +12:00
Thomas Munro 4f1f5a7f85 Remove fls(), use pg_leftmost_one_pos32() instead.
Commit 4f658dc8 provided the traditional BSD fls() function in
src/port/fls.c so it could be used in several places.  Later we added a
bunch of similar facilities in pg_bitutils.h, based on compiler
builtins that map to hardware instructions.  It's a bit confusing to
have both 1-based and 0-based variants of this operation in use in
different parts of the tree, and neither is blessed by a standard.
Let's drop fls.c and the configure probe, and reuse the newer code.
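
The two conventions differ only in their base; sketched here with the
GCC/Clang builtin that the pg_bitutils.h helpers typically map to
(illustrative, not the actual implementation):

    /* BSD-style, 1-based: fls(12) == 4, fls(0) == 0. */
    static int
    fls_like(unsigned int x)
    {
        return x ? 32 - __builtin_clz(x) : 0;
    }

    /* pg_leftmost_one_pos32-style, 0-based: 12 -> 3; input must be nonzero. */
    static int
    leftmost_one_pos32(unsigned int x)
    {
        return 31 - __builtin_clz(x);
    }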

Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CA%2BhUKG%2B7dSX1XF8yFGmYk-%3D48dbjH2kmzZj16XvhbrWP-9BzRg%40mail.gmail.com
2022-07-22 10:41:50 +12:00
Fujii Masao b24b2be119 Fix assertion failure and segmentation fault in backup code.
When a non-exclusive backup is canceled, do_pg_abort_backup() is called
and resets some variables set by pg_backup_start (pg_start_backup in v14
or before). But previously it forgot to reset the session state indicating
whether a non-exclusive backup is in progress in this session.

This issue could cause an assertion failure when the session running
BASE_BACKUP is terminated after it executed pg_backup_start and
pg_backup_stop (pg_stop_backup in v14 or before). Also it could cause
a segmentation fault when pg_backup_stop is called after BASE_BACKUP
in the same session is canceled.

This commit fixes the issue by making do_pg_abort_backup reset
that session state.

Back-patch to all supported branches.

Author: Fujii Masao
Reviewed-by: Kyotaro Horiguchi, Masahiko Sawada, Michael Paquier, Robert Haas
Discussion: https://postgr.es/m/3374718f-9fbf-a950-6d66-d973e027f44c@oss.nttdata.com
2022-07-20 09:57:01 +09:00
Peter Eisentraut 9fd45870c1 Replace many MemSet calls with struct initialization
This replaces all MemSet() calls with struct initialization where that
is easily and obviously possible.  (For example, some cases have to
worry about padding bits, so I left those.)

(The same could be done with appropriate memset() calls, but this
patch is part of an effort to phase out MemSet(), so it doesn't touch
memset() calls.)
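
The shape of the transformation, on a made-up struct for illustration:

    #include <string.h>

    typedef struct
    {
        int         count;
        double      total;
        const char *label;
    } Stats;

    void
    example(void)
    {
        /* Before: zero every byte explicitly (MemSet() wraps memset()). */
        Stats   a;
        memset(&a, 0, sizeof(a));

        /* After: struct initialization zeroes all members. */
        Stats   b = {0};

        (void) a;
        (void) b;
    }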

Reviewed-by: Ranier Vilela <ranier.vf@gmail.com>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/9847b13c-b785-f4e2-75c3-12ec77a3b05c@enterprisedb.com
2022-07-16 08:50:49 +02:00
Fujii Masao 62c46eee22 Add checkpoint and REDO LSN to log_checkpoints message.
It is useful for debugging purposes to report the checkpoint LSN and
REDO LSN in the log_checkpoints message. It can give more context while
analyzing checkpoint-related issues. pg_controldata reports the last
checkpoint LSN and REDO LSN, but having this information alongside
the log message helps analyze issues that happened previously,
connect the dots and identify the root cause.

Author: Bharath Rupireddy, Kyotaro Horiguchi
Reviewed-by: Michael Paquier, Julien Rouhaud, Nathan Bossart, Fujii Masao, Greg Stark
Discussion: https://postgr.es/m/CALj2ACWt6kqriAHrO+AJj+OmP=suwbktHT5JoYAn-nqZe2gd2g@mail.gmail.com
2022-07-07 22:37:54 +09:00
David Rowley 0229106afa Overload index_form_tuple to allow the memory context to be supplied
40af10b57 changed things so we make use of a generation memory context for
storing tuples to be sorted by tuplesort.c. That change does not play
nicely with the changes made in 9f03ca915 (back in 2014). That commit
changed things so that index_form_tuple() is called while switched into
the tuplesort's tuplecontext. In order to fetch the tuple from the index,
index_form_tuple() must do various memory allocations which are unrelated
to the storage of the final returned tuple. Although all of these
allocations are pfree'd, the fact that we now use a generation context
means that the memory for these pfree'd allocations won't be used again by
any other allocation due to generation.c's lack of freelists.  This could
result in sorts used for building indexes exceeding maintenance_work_mem
by a very large amount.

Here we fix it so we no longer allocate anything apart from the tuple
itself into the generation context by adding a new version of
index_form_tuple named index_form_tuple_context, which can be called to
specify the MemoryContext to allocate the tuple into.

Discussion: https://postgr.es/m/CAApHDvrHQkiFRHiGiAS-LMOvJN-eK-s762=tVzBz8ZqUea-a_A@mail.gmail.com
Backpatch-through: 15, where 40af10b57 was added.
2022-07-07 08:14:00 +12:00
Robert Haas b0a55e4329 Change internal RelFileNode references to RelFileNumber or RelFileLocator.
We have been using the term RelFileNode to refer to either (1) the
integer that is used to name the sequence of files for a certain relation
within the directory set aside for that tablespace/database combination;
or (2) that value plus the OIDs of the tablespace and database; or
occasionally (3) the whole series of files created for a relation
based on those values. Using the same name for more than one thing is
confusing.

Replace RelFileNode with RelFileNumber when we're talking about just the
single number, i.e. (1) from above, and with RelFileLocator when we're
talking about all the things that are needed to locate a relation's files
on disk, i.e. (2) from above. In the places where we refer to (3) as
a relfilenode, instead refer to "relation storage".

Since there is a ton of SQL code in the world that knows about
pg_class.relfilenode, don't change the name of that column, or of other
SQL-facing things that derive their name from it.

On the other hand, do adjust closely-related internal terminology. For
example, the structure member names dbNode and spcNode appear to be
derived from the fact that the structure itself was called RelFileNode,
so change those to dbOid and spcOid. Likewise, various variables with
names like rnode and relnode get renamed appropriately, according to
how they're being used in context.

Hopefully, this is clearer than before. It is also preparation for
future patches that intend to widen the relfilenumber field from its
current width of 32 bits. Variables that store a relfilenumber are now
declared as type RelFileNumber rather than type Oid; right now, these
are the same, but that can now more easily be changed.
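
For orientation, the renamed pieces fit together roughly like this; this
is a sketch distilled from the description above, not copied from the
headers, so treat the exact member layout as an assumption:

    /* A bare relation file number; currently the same width as an OID. */
    typedef Oid RelFileNumber;

    /* Everything needed to locate a relation's files on disk. */
    typedef struct RelFileLocator
    {
        Oid             spcOid;         /* tablespace (was spcNode) */
        Oid             dbOid;          /* database (was dbNode) */
        RelFileNumber   relNumber;      /* file number (was relNode) */
    } RelFileLocator;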

Dilip Kumar, per an idea from me. Reviewed also by Andres Freund.
I fixed some whitespace issues, changed a couple of words in a
comment, and made one other minor correction.

Discussion: http://postgr.es/m/CA+TgmoamOtXbVAQf9hWFzonUo6bhhjS6toZQd7HZ-pmojtAmag@mail.gmail.com
Discussion: http://postgr.es/m/CA+Tgmobp7+7kmi4gkq7Y+4AM9fTvL+O1oQ4-5gFTT+6Ng-dQ=g@mail.gmail.com
Discussion: http://postgr.es/m/CAFiTN-vTe79M8uDH1yprOU64MNFE+R3ODRuA+JWf27JbhY4hJw@mail.gmail.com
2022-07-06 11:39:09 -04:00
Peter Eisentraut 16d52fc89d Refactor sending of DataRow messages in replication protocol
Some routines open-coded the construction of DataRow messages.  Use
TupOutputState struct and associated functions instead, which was
already done in some places.

SendTimeLineHistory() is a bit more complicated and isn't converted by
this.

Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/7e4fdbdc-699c-4cd0-115d-fb78a957fc22@enterprisedb.com
2022-07-06 08:42:56 +02:00
Michael Paquier dac1ff3090 Replace durable_rename_excl() by durable_rename(), take two
durable_rename_excl() attempts to avoid overwriting any existing files
by using link() and unlink(), and it falls back to rename() on some
platforms (aka WIN32), which offers no such overwrite protection.  Most
callers use durable_rename_excl() just in case there is an existing
file, but in practice there shouldn't be one (see below for more
details).
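
The difference in overwrite behavior between the two approaches,
sketched with plain POSIX calls (illustrative only):

    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        /* durable_rename_excl() style: link() refuses to clobber "a_new",
         * but a crash between the two calls leaves two links to one file. */
        if (link("a_old", "a_new") == 0)
            unlink("a_old");
        else
            perror("link");

        /* durable_rename() style: rename() atomically replaces "b_new". */
        if (rename("b_old", "b_new") != 0)
            perror("rename");
        return 0;
    }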

Furthermore, failures during durable_rename_excl() can result in
multiple hard links to the same file.  As per Nathan's tests, it is
possible to end up with two links to the same file in pg_wal after a
crash just before unlink() during WAL recycling.  Specifically, the test
produced links to the same file for the current WAL file and the next
one because the half-recycled WAL file was re-recycled upon restarting,
leading to WAL corruption.

This change replaces all the calls of durable_rename_excl() with
durable_rename().  This removes the protection against accidentally
overwriting an existing file, but some platforms are already living
without it and ordinarily there shouldn't be one.  The function itself
is left around in case any extensions are using it.  It will be removed
on HEAD via a follow-up commit.

Here is a summary of the existing callers of durable_rename_excl() (see
second discussion link at the bottom), replaced by this commit.  First,
basic_archive used it to avoid overwriting an archive concurrently
created by another server, but as mentioned above, it will still
overwrite files on some platforms.  Second, xlog.c uses it to recycle
past WAL segments, where an overwrite should not happen (origin of the
change at f0e37a8) because there are protections about the WAL segment
to select when recycling an entry.  The third and last area is related
to the write of timeline history files.  writeTimeLineHistory() will
write a new timeline history file at the end of recovery on promotion,
so there should be no such files for the same timeline.
What remains is writeTimeLineHistoryFile(), which can be used in parallel
by a WAL receiver and the startup process.  Some digging in the
buildfarm shows that EEXIST from a WAL receiver can happen with an error
of "could not link file \"pg_wal/xlogtemp.NN\" to \"pg_wal/MM.history\"",
which would cause an automatic restart of the WAL receiver as the error
is promoted to FATAL.  Hence this should improve the stability of the WAL
receiver, as rename() would overwrite an existing TLI history file
already fetched by the startup process at recovery.

This is a bug fix, but knowing the unlikeliness of the problem involving
one or more crashes at an exceptionally bad moment, no backpatch is
done.  Also, I want to be careful with such changes (aaa3aed did the
opposite of this change by removing HAVE_WORKING_LINK so that Windows
would do a link() rather than a rename(), but this was not
concurrent-safe).  A backpatch could be revisited in the future.  This
is the second time this change has been attempted, ccfbd92 being the
first one, but this time no assertions are added for the case of a TLI
history file written concurrently by the WAL receiver or the startup
process, because we can expect one to exist (some of the TAP tests are
able to trigger this with the proper timing).

Author: Nathan Bossart
Reviewed-by: Robert Haas, Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/20220407182954.GA1231544@nathanxps13
Discussion: https://postgr.es/m/Ym6GZbqQdlalSKSG@paquier.xyz
2022-07-05 10:16:12 +09:00
Peter Eisentraut 2ce648f750 Refactor sending of RowDescription messages in replication protocol
Some routines open-coded the construction of RowDescription messages.
Instead, we have support for doing this using tuple descriptors and
DestRemoteSimple, so use that instead.

Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/7e4fdbdc-699c-4cd0-115d-fb78a957fc22@enterprisedb.com
2022-07-04 19:43:58 +02:00
Peter Eisentraut d746021de1 Add construct_array_builtin, deconstruct_array_builtin
There were many calls to construct_array() and deconstruct_array() for
built-in types, for example, when dealing with system catalog columns.
These all hardcoded the type attributes necessary to pass to these
functions.

To simplify this a bit, add construct_array_builtin(),
deconstruct_array_builtin() as wrappers that centralize this hardcoded
knowledge.  This simplifies many call sites and reduces the amount of
hardcoded stuff that is spread around.
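
A sketch of the call-site change, assuming a Datum array "datums" of
ndatums text values is already built; the wrapper's exact signature here
is inferred from the description rather than quoted from the header:

    /* Before: every caller spells out text's length/byval/alignment. */
    ArrayType  *arr = construct_array(datums, ndatums, TEXTOID,
                                      -1, false, TYPALIGN_INT);

    /* After: the wrapper supplies those attributes for built-in types. */
    ArrayType  *arr2 = construct_array_builtin(datums, ndatums, TEXTOID);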

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/2914356f-9e5f-8c59-2995-5997fc48bcba%40enterprisedb.com
2022-07-01 11:23:15 +02:00
Michael Paquier 33bd4698c1 Fix code comments still referring to pg_start/stop_backup()
pg_start_backup() and pg_stop_backup() have been respectively renamed to
pg_backup_start() and pg_backup_stop() as of 39969e2, but a few comments
did not get the call.

Reviewed-by: Kyotaro Horiguchi, David Steele
Discussion: https://postgr.es/m/YrqGlj1+4DF3dbZ/@paquier.xyz
2022-07-01 09:37:17 +09:00
Peter Eisentraut 258f48f858 Change some unnecessary MemSet calls
MemSet() with a value other than 0 just falls back to memset(), so the
indirection is unnecessary if the value is constant and not 0.  Since
there is some interest in getting rid of MemSet(), this gets some easy
cases out of the way.  (There are a few MemSet() calls that I didn't
change to maintain the consistency with their surrounding code.)

Discussion: https://www.postgresql.org/message-id/flat/CAEudQApCeq4JjW1BdnwU=m=-DvG5WyUik0Yfn3p6UNphiHjj+w@mail.gmail.com
2022-07-01 00:16:38 +02:00
Tom Lane 82d0ffae32 pgindent run prior to branching v15.
pgperltidy and reformat-dat-files too.  Not many changes.
2022-06-30 11:03:03 -04:00
Heikki Linnakangas adf6d5dfb2 Fix visibility check when XID is committed in CLOG but not in procarray.
TransactionIdIsInProgress had a fast path to return 'false' if the
single-item CLOG cache said that the transaction was known to be
committed. However, that was wrong, because a transaction is first
marked as committed in the CLOG but doesn't become visible to others
until it has removed its XID from the proc array. That could lead to an
error:

    ERROR:  t_xmin is uncommitted in tuple to be updated

or for an UPDATE to go ahead without blocking, before the previous
UPDATE on the same row was made visible.

The window is usually very short, but synchronous replication makes it
much wider, because the wait for the synchronous replica happens in that
window.

Another thing that makes it hard to hit is that it's hard to get such
a commit-in-progress transaction into the single item CLOG cache.
Normally, if you call TransactionIdIsInProgress on such a transaction,
it determines that the XID is in progress without checking the CLOG
and without populating the cache. One way to prime the cache is to
explicitly call pg_xact_status() on the XID. Another way is to use a
lot of subtransactions, so that the subxid cache in the proc array is
overflowed, making TransactionIdIsInProgress rely on pg_subtrans and
CLOG checks.

This has been broken ever since it was introduced in 2008, but the race
condition is very hard to hit, especially without synchronous
replication. There were a couple of reports of the error starting from
summer 2021, but no one was able to find the root cause then.

TransactionIdIsKnownCompleted() is now unused. In 'master', remove it,
but I left it in place in backbranches in case it's used by extensions.

Also change pg_xact_status() to check TransactionIdIsInProgress().
Previously, it only checked the CLOG, and returned "committed" before
the transaction was actually made visible to other queries. Note that
this also means that you cannot use pg_xact_status() to reproduce the
bug anymore, even if the code wasn't fixed.

Report and analysis by Konstantin Knizhnik. Patch by Simon Riggs, with
the pg_xact_status() change added by me.

Author: Simon Riggs
Reviewed-by: Andres Freund
Discussion: https://www.postgresql.org/message-id/flat/4da7913d-398c-e2ad-d777-f752cf7f0bbb%40garret.ru
2022-06-27 08:21:08 +03:00
Tomas Vondra e3fcca0d0d Revert changes in HOT handling of BRIN indexes
This reverts commits 5753d4ee32 and fe60b67250 that modified HOT to
ignore BRIN indexes. The commit message for 5753d4ee32 claims that:

    When determining whether an index update may be skipped by using
    HOT, we can ignore attributes indexed only by BRIN indexes. There
    are no index pointers to individual tuples in BRIN, and the page
    range summary will be updated anyway as it relies on visibility
    info.

This is partially incorrect - it's true BRIN indexes don't point to
individual tuples, so HOT chains are not an issue, but the visibility
info is not sufficient to keep the index up to date. This can easily
result in corrupted indexes, as demonstrated in the hackers thread.

This does not mean relaxing the HOT restrictions for BRIN is a lost
cause, but it needs to handle the two aspects (allowing HOT chains and
updating the page range summaries) separately. But that requires
major changes, and it's too late for that in the current dev cycle.

Reported-by: Tomas Vondra
Discussion: https://postgr.es/m/05ebcb44-f383-86e3-4f31-0a97a55634cf@enterprisedb.com
2022-06-16 15:02:49 +02:00
Tom Lane 7ab5b4eb48 Be more careful about GucSource for internally-driven GUC settings.
The original advice for hard-wired SetConfigOption calls was to use
PGC_S_OVERRIDE, particularly for PGC_INTERNAL GUCs.  However,
that's really overkill for PGC_INTERNAL GUCs, since there is no
possibility that we need to override a user-provided setting.
Instead use PGC_S_DYNAMIC_DEFAULT in most places, so that the
value will appear with source = 'default' in pg_settings and thereby
not be shown by psql's new \dconfig command.  The one exception is
that when changing in_hot_standby in a hot-standby session, we still
use PGC_S_OVERRIDE, because people felt that seeing that in \dconfig
would be a good thing.

Similarly use PGC_S_DYNAMIC_DEFAULT for the auto-tune value of
wal_buffers (if possible, that is if wal_buffers wasn't explicitly
set to -1), and for the typical 2MB value of max_stack_depth.
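
The sort of call affected looks roughly like this (values and placement
are illustrative assumptions, not quoted from the patch):

    /* Auto-tuned default: reported with source = 'default' in pg_settings,
     * so psql's \dconfig no longer lists it as a changed setting. */
    SetConfigOption("wal_buffers", "2048",
                    PGC_POSTMASTER, PGC_S_DYNAMIC_DEFAULT);

    /* Hot-standby state: deliberately kept visible via PGC_S_OVERRIDE. */
    SetConfigOption("in_hot_standby", "true",
                    PGC_INTERNAL, PGC_S_OVERRIDE);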

In combination these changes remove four not-very-interesting
entries from the typical output of \dconfig, all of which people
fingered as "why is that showing up?" in the discussion thread.

Discussion: https://postgr.es/m/3118455.1649267333@sss.pgh.pa.us
2022-06-08 13:26:18 -04:00
Tom Lane dd1c8dd101 Silence compiler warnings from some older compilers.
Since a117cebd6, some older gcc versions issue "variable may be used
uninitialized in this function" complaints for brin_summarize_range.
Silence that using the same coding pattern as in bt_index_check_internal;
arguably, a117cebd6 had too narrow a view of which compilers might give
trouble.

Nathan Bossart and Tom Lane.  Back-patch as the previous commit was.

Discussion: https://postgr.es/m/20220601163537.GA2331988@nathanxps13
2022-06-01 17:21:45 -04:00
Amit Kapila 0a050ee000 Fix typo in hash README.
Author: Peter Smith
Discussion: https://postgr.es/m/CAHut+Pu-V22PiJF2ym9_NVZe-+qnycfyEX24dZm=7URWhDHJ3w@mail.gmail.com
2022-05-31 14:37:41 +05:30
Michael Paquier b4529005fd Revert "Add single-item cache when looking at topmost XID of a subtrans XID"
This reverts commit 06f5295 due to issues with this approach, both in
terms of efficiency impact and stability.  First, contrary to the
single-item cache for transaction IDs in transam.c, the cache may not
be hit for a long time, and without an invalidation mechanism to
clear it, it would cause inconsistent results on wraparound, for
example.  Second, the use of SubTransGetTopmostTransaction() for the
caching has a limited impact on performance.  SubTransGetParent() could
have more impact, though the benefit of the single-item approach
still needs to be demonstrated by benchmarking, particularly under
conditions where SLRU lookups are stressed in parallel with overflowed
snapshots (aka more than 64 subxids generated, for example).

After discussion with Andres Freund.

Discussion: https://postgr.es/m/20220524235250.gtt3uu5zktfkr4hv@alap3.anarazel.de
2022-05-28 15:02:08 +09:00
Andres Freund 09cd33f47b Add 'static' to file-local variables missing it.
Noticed when comparing the set of exported symbols without / with
-fvisibility=hidden after adding PGDLLIMPORT to intentionally exported
symbols.

Discussion: https://postgr.es/m/20220512164513.vaheofqp2q24l65r@alap3.anarazel.de
2022-05-12 12:39:33 -07:00
Tom Lane 23e7b38bfe Pre-beta mechanical code beautification.
Run pgindent, pgperltidy, and reformat-dat-files.
I manually fixed a couple of comments that pgindent uglified.
2022-05-12 15:17:30 -04:00
David Rowley c90c16591c Fix some incorrect preprocessor tests in tuplesort specializations
697492434 added 3 new quicksort specialization functions for common
datatypes.

That commit was not very consistent in how it would determine if we're
compiling for 32-bit or 64-bit machines.  It would sometimes use
USE_FLOAT8_BYVAL and at other times check if SIZEOF_DATUM == 8.  This
could cause theoretical problems due to the way USE_FLOAT8_BYVAL is now
defined based on SIZEOF_VOID_P >= 8.  If pointers for some reason were
ever larger than 8 bytes, then we'd end up doing 32-bit comparisons
mistakenly.  Let's just always check SIZEOF_DATUM >= 8.

It also seems that ssup_datum_signed_cmp is just never used on 32-bit
builds, so let's just ifdef that out to make sure we never accidentally
use that comparison function on such machines.  This also allows us to
ifdef out 1 of the 3 new quicksort specialization functions in 32-bit
builds, which seems to shrink the binary by over 4KB on my machine.

In passing, also add the missing DatumGetInt32() / DatumGetInt64() macros
in the comparison functions.
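
The resulting shape, as a rough sketch rather than the exact
tuplesort.c code:

    /* Build the 64-bit signed comparator only when a Datum holds an int64. */
    #if SIZEOF_DATUM >= 8
    static int
    ssup_datum_signed_cmp(Datum x, Datum y, SortSupport ssup)
    {
        int64       xx = DatumGetInt64(x);
        int64       yy = DatumGetInt64(y);

        if (xx < yy)
            return -1;
        if (xx > yy)
            return 1;
        return 0;
    }
    #endif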

Discussion: https://postgr.es/m/CAApHDvqcQExRhtRa9hJrJB_5egs3SUfOcutP3m+3HO8A+fZTPA@mail.gmail.com
Reviewed-by: John Naylor
2022-05-11 11:38:13 +12:00
Noah Misch a117cebd63 Make relation-enumerating operations be security-restricted operations.
When a feature enumerates relations and runs functions associated with
all found relations, the feature's user shall not need to trust every
user having permission to create objects.  BRIN-specific functionality
in autovacuum neglected to account for this, as did pg_amcheck and
CLUSTER.  An attacker having permission to create non-temp objects in at
least one schema could execute arbitrary SQL functions under the
identity of the bootstrap superuser.  CREATE INDEX (not a
relation-enumerating operation) and REINDEX protected themselves too
late.  This change extends to the non-enumerating amcheck interface.
Back-patch to v10 (all supported versions).

Sergey Shinderuk, reviewed (in earlier versions) by Alexander Lakhin.
Reported by Alexander Lakhin.

Security: CVE-2022-1552
2022-05-09 08:35:08 -07:00
Michael Paquier 7863ee4def Fix control file update done in restartpoints still running after promotion
If a cluster is promoted (aka the control file shows a state different
than DB_IN_ARCHIVE_RECOVERY) while CreateRestartPoint() is still
processing, this function could miss an update of the control file for
"checkPoint" and "checkPointCopy" but still do the recycling and/or
removal of the past WAL segments, assuming that the to-be-updated LSN
values should be used as reference points for the cleanup.  This causes
a follow-up restart attempting crash recovery to fail with a PANIC on a
missing checkpoint record if the end-of-recovery checkpoint triggered by
the promotion did not complete while the cluster abruptly stopped or
crashed before the completion of this checkpoint.  The PANIC would be
caused by the redo LSN referred in the control file as located in a
segment already gone, recycled by the previous restartpoint with
"checkPoint" out-of-sync in the control file.

This commit fixes the update of the control file during restartpoints so
that "checkPoint" and "checkPointCopy" are updated even if the cluster has
been promoted while a restartpoint is running, to be on par with the set
of WAL segments actually recycled at the end of CreateRestartPoint().

This problem exists in all the stable branches.  However, commit
7ff23c6, by removing the last call of CreateCheckPoint() from the
startup process, has made this bug much easier to reason about as
concurrent checkpoints are not possible anymore.  No backpatch is done
yet, mostly out of caution from me as a point release is close by, but
we need to think harder about the case of concurrent checkpoints at
promotion if the bgwriter is not considered as running by the startup
process in ~v14, so this change is done only on HEAD for the moment.

Reported-by: Fujii Masao, Rui Zhao
Author: Kyotaro Horiguchi
Reviewed-by: Nathan Bossart, Michael Paquier
Discussion: https://postgr.es/m/20220316.102444.2193181487576617583.horikyota.ntt@gmail.com
2022-05-09 08:39:59 +09:00
Peter Eisentraut 2e77180d45 Fix incorrect format placeholders 2022-05-04 07:57:39 +02:00
Jeff Davis ed57cac84d pg_walinspect: fix case where flush LSN is in the middle of a record.
Instability in the test for pg_walinspect revealed that
pg_get_wal_records_info_till_end_of_wal(x) would try to decode all the
records with a start LSN earlier than the flush LSN, even though that
might include a partial record at the end of the range. In that case,
read_local_xlog_page_no_wait() would return NULL when it tried to read
past the flush LSN, which would be interpreted as an error by the
caller. That caused a test failure only on a BF animal that had been
restarted recently, but could be expected to happen in the wild quite
easily depending on the alignment of various parameters.

Fix by using private data in read_local_xlog_page_no_wait() to signal
end-of-wal to the caller, so that it can be properly distinguished
from a real error.

Discussion: https://postgr.es/m/Ymd/e5eeZMNAkrXo%40paquier.xyz
Discussion: https://postgr.es/m/111657.1650910309@sss.pgh.pa.us

Authors: Thomas Munro, Bharath Rupireddy.
2022-04-30 09:05:32 -07:00
Michael Paquier 55b5686511 Revert recent changes with durable_rename_excl()
This reverts commits 2c902bb and ccfbd92.  Per buildfarm members
kestrel, rorqual and calliphoridae, the assertions checking that a TLI
history file should not exist when created by a WAL receiver have been
failing, and switching to durable_rename() over durable_rename_excl()
would cause the newest TLI history file to overwrite the existing one.
We need to think harder about such cases, so revert the new logic for
now.

Note that all the failures have been reported in the test
025_stuck_on_old_timeline.

Discussion: https://postgr.es/m/511362.1651116498@sss.pgh.pa.us
2022-04-28 13:08:16 +09:00
Michael Paquier ccfbd9287d Replace existing durable_rename_excl() calls with durable_rename()
durable_rename_excl() attempts to avoid overwriting any existing files
by using link() and unlink(), falling back to rename() on some platforms
(e.g., Windows where link() followed by unlink() is not concurrent-safe,
see 909b449).  Most callers of durable_rename_excl() use it just in case
there is an existing file, but it happens that for all of them we never
expect a target file to exist (WAL segment recycling, creation of
timeline history file and basic_archive).

basic_archive used durable_rename_excl() to avoid overwriting an archive
concurrently created by another server.  Now, there is a stat() call to
avoid overwriting an existing archive a couple of lines above, so note
that this change opens a small TOCTOU window in this module between the
stat() call and durable_rename().

Furthermore, as mentioned in the top comment of durable_rename_excl(),
this routine can result in multiple hard links to the same file and data
corruption, with two or more links to the same file in pg_wal/ if a
crash happens before the unlink() call during WAL recycling.
Specifically, this would produce links to the same file for the current
WAL file and the next one because the half-recycled WAL file was
re-recycled during crash recovery of a follow-up cluster restart.

This change replaces all calls to durable_rename_excl() with
durable_rename().  This removes the protection against accidentally
overwriting an existing file, but some platforms are already living
without it, and all those code paths never expect an existing file (a
couple of assertions are added to check for that, just in case).

This is a bug fix, but knowing the unlikeliness of the problem involving
one or more crashes at an exceptionally bad moment, no backpatch is
done.  This could be revisited in the future.

Author: Nathan Bossart
Reviewed-by: Robert Haas, Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/20220407182954.GA1231544@nathanxps13
2022-04-28 10:11:45 +09:00
Tom Lane 2cb1272445 Rethink method for assigning OIDs to the template0 and postgres DBs.
Commit aa0105141 assigned fixed OIDs to template0 and postgres
in a very ad-hoc way.  Notably, instead of teaching Catalog.pm
about these OIDs, the unused_oids script was just hacked to
not show them as unused.  That's problematic since, for example,
duplicate_oids wouldn't report any future conflict.  Hence,
invent a macro DECLARE_OID_DEFINING_MACRO() that can be used to
define an OID that is known to Catalog.pm and will participate
in duplicate-detection as well as renumbering by renumber_oids.pl.
(We don't anticipate renumbering these particular OIDs, but we
might as well build out all the Catalog.pm infrastructure while
we're here.)

Another issue is that aa0105141 neglected to touch IsPinnedObject,
with the result that it now claimed template0 and postgres are
pinned.  The right thing to do there seems to be to teach it that
no database is pinned, since in fact DROP DATABASE doesn't check
for pinned-ness (and at least for these cases, that is an
intentional choice).  It's not clear whether this wrong answer
had any visible effect, but perhaps it could have resulted in
erroneous management of dependency entries.

In passing, rename the TemplateDbOid macro to Template1DbOid
to reduce confusion (likely we should have done that way back
when we invented template0, but we didn't), and rename the
OID macros for template0 and postgres to have a similar style.

There are no changes to postgres.bki here, so no need for a
catversion bump.

Discussion: https://postgr.es/m/2935358.1650479692@sss.pgh.pa.us
2022-04-21 16:23:15 -04:00
Peter Geoghegan ba6af6aa0b vacuumlazy.c: MultiXactIds are MXIDs, not XMIDs.
Oversights in commits 0b018fab and f3c15cbe.
2022-04-20 18:29:02 -07:00
Thomas Munro acf1dd4234 Don't retry restore_command while reading ahead.
Suppress further attempts to read ahead in the WAL if we run out of
data, until the records already decoded have been replayed.  This
restores the traditional behavior for continuous archive recovery, which
is to retry the failing restore_command only every 5 seconds.  With the
coding in 5dc0418f, we would start retrying every time through the
recovery loop when our WAL decoding window hit the end of the current
segment and we tried to look ahead into a not-yet-available next file.
That was very slow.

Also change the no_readahead_until mechanism to use <= rather than <,
which seems more useful.  Otherwise we'd either get one extra unwanted
retry of restore_command, or we'd need to add 1 to an LSN.

No change in behavior for regular streaming.  That was already limited
by the flushedUpto variable, which won't be updated until we replay what
we have already.

Reported by Andres Freund while analyzing the failure of a TAP test on
build farm animal skink (investigation ongoing but probably due to
otherwise unrelated timing bugs triggered by this slowness magnified by
valgrind).

Discussion: https://postgr.es/m/20220409005910.alw46xqmmgny2sgr%40alap3.anarazel.de
2022-04-17 10:50:19 +12:00
Peter Geoghegan d3609dd254 Fix multi-table VACUUM VERBOSE accounting.
Per-backend global variables like VacuumPageHit are initialized once per
VACUUM command.  This was missed by commit 49c9d9fc, which unified
VACUUM VERBOSE and autovacuum logging.  As a result of that oversight,
incorrect values were shown when multiple relations were processed by a
single VACUUM VERBOSE command.

Relations that happened to be processed later on would show "buffer
usage:" values that incorrectly included buffer accesses made while
processing earlier unrelated relations.  The same accesses were counted
multiple times.

To fix, take initial values for the tracker variables at the start of
heap_vacuum_rel(), and report delta values later on.
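
In other words, the usual delta-counter pattern; a rough sketch with
simplified reporting (not the actual code):

    /* At the start of this relation's VACUUM, snapshot the counters ... */
    int64   startpageshit = VacuumPageHit;
    int64   startpagesmiss = VacuumPageMiss;

    /* ... and at the end, report only what this relation contributed. */
    elog(INFO, "buffer usage: %lld hits, %lld misses",
         (long long) (VacuumPageHit - startpageshit),
         (long long) (VacuumPageMiss - startpagesmiss));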
2022-04-15 15:48:39 -07:00
Peter Geoghegan bdb71dbe80 VACUUM VERBOSE: Show dead items for an empty table.
Be consistent about the lines that VACUUM VERBOSE outputs by including
an "index scan not needed: " line for completely empty tables. This
makes the output more readable, especially with multiple distinct VACUUM
operations processed by the same VACUUM command.  It's also more
consistent; even empty tables can use the failsafe, which wasn't
reported in the standard way until now.

Follow-up to commit 6e20f460, which taught VACUUM VERBOSE to be more
consistent about reporting on scanned pages with empty tables.
2022-04-15 14:20:56 -07:00
Peter Geoghegan 357c8455e6 Adjust VACUUM's removable cutoff log message.
The age of OldestXmin (a.k.a. "removable cutoff") when VACUUM ends often
indicates the approximate number of XIDs consumed while VACUUM ran.
However, there is at least one important exception: the cutoff could be
held back by a snapshot that was acquired before our VACUUM even began.
Successive VACUUM operations may even use exactly the same old cutoff in
extreme cases involving long held snapshots.

The log messages that described how removable cutoff aged (which were
added by commit 872770fd) created the impression that we were reporting
on how VACUUM's usable cutoff advanced while VACUUM ran, which was
misleading in these extreme cases.  Fix by using a more general wording.

Per gripe from Tom Lane.

In passing, relocate related instrumentation code for clarity.

Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/1643035.1650035653@sss.pgh.pa.us
2022-04-15 13:21:43 -07:00
Tom Lane 7b7ed046cb Prevent access to no-longer-pinned buffer in heapam_tuple_lock().
heap_fetch() used to have a "keep_buf" parameter that told it to return
ownership of the buffer pin to the caller after finding that the
requested tuple TID exists but is invisible to the specified snapshot.
This was thoughtlessly removed in commit 5db6df0c0, which broke
heapam_tuple_lock() (formerly EvalPlanQualFetch) because that function
needs to do more accesses to the tuple even if it's invisible.  The net
effect is that we would continue to touch the page for a microsecond or
two after releasing pin on the buffer.  Usually no harm would result;
but if a different session decided to defragment the page concurrently,
we could see garbage data and mistakenly conclude that there's no newer
tuple version to chain up to.  (It's hard to say whether this has
happened in the field.  The bug was actually found thanks to a later
change that allowed valgrind to detect accesses to non-pinned buffers.)

The most reasonable way to fix this is to reintroduce keep_buf,
although I made it behave slightly differently: buffer ownership
is passed back only if there is a valid tuple at the requested TID.
In HEAD, we can just add the parameter back to heap_fetch().
To avoid an API break in the back branches, introduce an additional
function heap_fetch_extended() in those branches.

In HEAD there is an additional, less obvious API change: tuple->t_data
will be set to NULL in all cases where buffer ownership is not returned,
in particular when the tuple exists but fails the time qual (and
!keep_buf).  This is to defend against any other callers attempting to
access non-pinned buffers.  We concluded that making that change in back
branches would be more likely to introduce problems than cure any.

In passing, remove a comment about heap_fetch that was obsoleted by
9a8ee1dc6.

Per bug #17462 from Daniil Anisimov.  Back-patch to v12 where the bug
was introduced.

Discussion: https://postgr.es/m/17462-9c98a0f00df9bd36@postgresql.org
2022-04-13 13:35:07 -04:00
Alvaro Herrera 24d2b2680a
Remove extraneous blank lines before block-closing braces
These are useless and distracting.  We wouldn't have written the code
with them to begin with, so there's no reason to keep them.

Author: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20220411020336.GB26620@telsasoft.com
Discussion: https://postgr.es/m/attachment/133167/0016-Extraneous-blank-lines.patch
2022-04-13 19:16:02 +02:00
Robert Haas 7fc0e7de9f Revert the addition of GetMaxBackends() and related stuff.
This reverts commits 0147fc7, 4567596, aa64f23, and 5ecd018.
There is no longer agreement that introducing this function
was the right way to address the problem. The consensus now
seems to favor trying to make a correct value for MaxBackends
available to modules executing their _PG_init() functions.

Nathan Bossart

Discussion: http://postgr.es/m/20220323045229.i23skfscdbvrsuxa@jrouhaud
2022-04-12 14:45:23 -04:00
Tom Lane bd037dc928 Make XLogRecGetBlockTag() throw error if there's no such block.
All but a few existing callers assume without checking that this
function succeeds.  While it probably will, that's a poor excuse for
not checking.  Let's make it return void and instead throw an error
if it doesn't find the block reference.  Callers that actually need
to handle the no-such-block case must now use the underlying function
XLogRecGetBlockTagExtended.

In addition to being a bit less error-prone, this should also serve
to suppress some Coverity complaints about XLogRecGetBlockRefInfo.

While at it, clean up some inconsistency about use of the
XLogRecHasBlockRef macro: make XLogRecGetBlockTagExtended use
that instead of open-coding the same condition, and avoid calling
XLogRecHasBlockRef twice in relevant code paths.  (That is,
calling XLogRecHasBlockRef followed by XLogRecGetBlockTag is now
deprecated: use XLogRecGetBlockTagExtended instead.)

Patch HEAD only; this doesn't seem to have enough value to consider
a back-branch API break.

Discussion: https://postgr.es/m/425039.1649701221@sss.pgh.pa.us
2022-04-11 17:43:53 -04:00
Peter Geoghegan 9debd12348 Remove comment about historic heap vacuuming issue.
Remove comment block about how heap page vacuuming used to set tuples
with storage to LP_UNUSED in a rare edge case that can no longer happen
following commit 8523492d4e.  The comments seem unnecessary now, since
it's generally clear that heap vacuuming only applies to LP_DEAD
items from VACUUM's first heap pass following more recent work from
commits 12b5ade902 and 4f8d9d1217.
2022-04-11 14:20:46 -07:00