Commit Graph

10 Commits

Author SHA1 Message Date
Michael Paquier 7e61e4cc7c Make two-phase tests of ECPG and main suite more concurrent-proof
The ECPG and main 2PC tests have been using rather-generic names for the
prepared transactions they generate.  This commit switches the 2PC
transactions to use more complex GIDs, reducing the risk of naming
conflicts.

The main 2PC tests also include scans of pg_prepared_xacts that do not
filter on the GID of the prepared transactions, so the tests can fail
when any other 2PC transaction runs concurrently.  The CI has seen such
failures when installcheck runs the ECPG and the main regression test
suites in parallel.  The queries on pg_prepared_xacts gain quals so that
they look only at the GIDs generated locally.

The race is very hard to reproduce, so no backpatch is done for now.

Reported-by: Richard Guo
Discussion: https://postgr.es/m/CAMbWs4-mWCGbbE_bne5=AfqjYGDaUZmjCw2+soLjrdNA0xUDFw@mail.gmail.com
2024-04-29 21:10:41 +09:00
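
A minimal sketch of the pattern this commit moves toward; the table name
and GID format here are illustrative, not the exact ones used by the tests:

    -- A suite-specific GID makes collisions with concurrent 2PC tests unlikely.
    BEGIN;
    INSERT INTO pxtest1 VALUES ('foo');
    PREPARE TRANSACTION 'regress_pxtest_gid_1';

    -- Filtering on the locally generated GIDs keeps prepared transactions
    -- from other suites out of the expected output.
    SELECT gid FROM pg_prepared_xacts
      WHERE gid LIKE 'regress_pxtest%'
      ORDER BY gid;

    COMMIT PREPARED 'regress_pxtest_gid_1';
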
Tom Lane 6310809c4a Fix check for conflicting session- vs transaction-level locks.
We have an implementation restriction that PREPARE TRANSACTION can't
handle cases where both session-lifespan and transaction-lifespan locks
are held on the same lockable object.  (That's because we'd otherwise
need to acquire a new PROCLOCK entry during post-prepare cleanup, which
is an operation that might fail.  The situation can only arise with odd
usages of advisory locks, so removing the restriction is probably not
worth the amount of effort it would take.)  AtPrepare_Locks attempted
to enforce this, but its logic was many bricks shy of a load, because
it only detected cases where the session and transaction locks had the
same lockmode.  Locks of different modes on the same object would lead
to the rather unhelpful message "PANIC: we seem to have dropped a bit
somewhere".

To fix, build a transient hashtable with one entry per locktag,
not one per locktag + mode, and use that to detect conflicts.

Per bug #17122 from Alexander Pyhalov.  This bug is ancient,
so back-patch to all supported branches.

Discussion: https://postgr.es/m/17122-04f3c32098a62233@postgresql.org
2021-07-24 18:35:52 -04:00
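
The restriction is easiest to hit with advisory locks.  A hedged sketch of
the formerly mishandled case (the lock key and GID are arbitrary):

    -- Session-lifespan and transaction-lifespan locks on the same lockable
    -- object, in different modes, so the old same-lockmode check missed it.
    BEGIN;
    SELECT pg_advisory_lock(42);              -- session-level, exclusive mode
    SELECT pg_advisory_xact_lock_shared(42);  -- transaction-level, shared mode
    PREPARE TRANSACTION 'demo_gid';
    -- Before the fix this could end in "PANIC: we seem to have dropped a bit
    -- somewhere"; with the fix it fails with the intended error about holding
    -- both session-level and transaction-level locks on the same object.
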
Tom Lane 1c7a66a8e9 Remove unnecessary dependency on statement_timeout in prepared_xacts test.
Rather than waiting around for statement_timeout to expire, we can just
try to take the table's lock in nowait mode.  This saves some fraction
under 4 seconds when running this test with prepared xacts available,
and it guards against timeout-expired-anyway failures on very slow
machines when prepared xacts are not available, as seen in a recent
failure on axolotl for instance.

This approach could fail if autovacuum were to take an exclusive lock
on the test table concurrently, but there's no reason for it to do so.

Since the main point here is to improve stability in the buildfarm,
back-patch to all supported branches.
2017-03-13 16:46:32 -04:00
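
A sketch of the technique; the table name is illustrative rather than the
test's actual one:

    -- Probe the lock and fail immediately instead of waiting out
    -- statement_timeout.  If a prepared transaction still holds a
    -- conflicting lock, the NOWAIT lock attempt errors at once; if
    -- prepared xacts are unavailable, the lock succeeds at once.
    BEGIN;
    LOCK TABLE pxtest3 IN ACCESS SHARE MODE NOWAIT;
    ROLLBACK;
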
Noah Misch 8e3b4d9d40 Eschew "RESET statement_timeout" in tests.
Instead, use transaction abort.  Given an unlucky bout of latency, the
timeout would cancel the RESET itself.  Buildfarm members gharial,
lapwing, mereswine, shearwater, and sungazer witness that.  Back-patch
to 9.1 (all supported versions).  The query_canceled test still could
timeout before entering its subtransaction; for whatever reason, that
has yet to happen on the buildfarm.
2015-10-20 00:37:22 -04:00
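
A hedged sketch of the two patterns, not the tests' literal statements:

    -- Fragile: with unlucky latency the timeout can cancel the RESET itself.
    SET statement_timeout = '100ms';
    SELECT pg_sleep(1);      -- expected to be canceled by the timeout
    RESET statement_timeout;

    -- Robust: let transaction abort restore the previous setting.
    BEGIN;
    SET statement_timeout = '100ms';
    SELECT pg_sleep(1);      -- cancellation aborts the transaction
    ROLLBACK;                -- the SET is undone along with everything else
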
Alvaro Herrera 8902f79264 Remove unnecessary and unreliable test 2015-02-20 14:03:49 -03:00
Alvaro Herrera d42358efb1 Have TRUNCATE update pgstat tuple counters
This works by keeping a per-subtransaction record of the ins/upd/del
counters as they were before the truncate, and then resetting them; the
saved record is used to restore the previous state if the truncate is
rolled back, whether in a subtransaction or in the whole transaction.
The state is propagated upwards as subtransactions commit.

When the per-table data is sent to the stats collector, a flag indicates
to reset the live/dead counters to zero as well.

Catalog version bumped due to the change in pgstat format.

Author: Alexander Shulgin
Discussion: 1007.1207238291@sss.pgh.pa.us
Discussion: 548F7D38.2000401@BlueTreble.com
Reviewed-by: Álvaro Herrera, Jim Nasby
2015-02-20 12:10:01 -03:00
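
A small sketch of the visible effect; the table name is illustrative, and
since the statistics are reported asynchronously the numbers may take a
moment to appear in the view:

    CREATE TABLE trunc_stats_demo (id int);
    INSERT INTO trunc_stats_demo SELECT generate_series(1, 100);

    BEGIN;
    TRUNCATE trunc_stats_demo;
    ROLLBACK;                    -- saved counters restored, as if never truncated

    TRUNCATE trunc_stats_demo;   -- committed: live/dead tuple counters reset to zero

    SELECT n_tup_ins, n_live_tup, n_dead_tup
      FROM pg_stat_user_tables
     WHERE relname = 'trunc_stats_demo';
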
Heikki Linnakangas 928408d9e5 Fix a bug with SSI and prepared transactions:
If there's a dangerous structure T0 ---> T1 ---> T2, and T2 commits first,
we need to abort something. If T2 commits before both conflicts appear,
then it should be caught by OnConflict_CheckForSerializationFailure. If
both conflicts appear before T2 commits, it should be caught by
PreCommit_CheckForSerializationFailure. But that is actually run when
T2 *prepares*. Fix that in OnConflict_CheckForSerializationFailure, by
treating a prepared T2 as if it committed already.

This is mostly a problem for prepared transactions, which stay in the
prepared state for some time, but regular transactions are affected too,
because they also pass through the prepared state in the SSI code for a
short moment when they're committed.

Kevin Grittner and Dan Ports
2011-07-07 18:12:15 +03:00
Heikki Linnakangas 38c0e72180 Adjust the alternative expected output file for prepared_xacts test case,
used when max_prepared_transactions=0, for the recent changes in the test
case.
2011-06-21 17:07:10 +03:00
Heikki Linnakangas dafaa3efb7 Implement genuine serializable isolation level.
Until now, our Serializable mode has in fact been what's called Snapshot
Isolation, which allows some anomalies that could not occur in any
serialized ordering of the transactions. This patch fixes that using a
method called Serializable Snapshot Isolation, based on research papers by
Michael J. Cahill (see README-SSI for full references). In Serializable
Snapshot Isolation, transactions run like they do in Snapshot Isolation,
but a predicate lock manager observes the reads and writes performed and
aborts transactions if it detects that an anomaly might occur. This method
produces some false positives, i.e. it sometimes aborts transactions even
though there is no anomaly.

To track reads we implement predicate locking, see storage/lmgr/predicate.c.
Whenever a tuple is read, a predicate lock is acquired on the tuple. Shared
memory is finite, so when a transaction takes many tuple-level locks on a
page, the locks are promoted to a single page-level lock, and further to a
single relation level lock if necessary. To lock key values with no matching
tuple, a sequential scan always takes a relation-level lock, and an index
scan acquires a page-level lock that covers the search key, whether or not
there are any matching keys at the moment.

A predicate lock doesn't conflict with any regular locks or with other
predicate locks in the normal sense. They're only used by the predicate lock
manager to detect the danger of anomalies. Only serializable transactions
participate in predicate locking, so there should be no extra overhead
for other transactions.

Predicate locks can't be released at commit, but must be remembered until
all the transactions that overlapped with them have completed. That means that
we need to remember an unbounded number of predicate locks, so we apply a
lossy but conservative method of tracking locks for committed transactions.
If we run short of shared memory, we overflow to a new "pg_serial" SLRU
pool.

We don't currently allow Serializable transactions in Hot Standby mode.
That would be hard, because even read-only transactions can cause anomalies
that wouldn't otherwise occur.

Serializable isolation mode now means the new fully serializable level.
Repeatable Read gives you the old Snapshot Isolation level that we have
always had.

Kevin Grittner and Dan Ports, reviewed by Jeff Davis, Heikki Linnakangas and
Anssi Kääriäinen
2011-02-08 00:09:08 +02:00
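
A classic write-skew example that the old Snapshot Isolation behavior (now
Repeatable Read) permits but the new Serializable level detects; the schema
and values are illustrative:

    CREATE TABLE accts (id int PRIMARY KEY, balance int);
    INSERT INTO accts VALUES (1, 100), (2, 100);

    -- Session A:
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT sum(balance) FROM accts;                          -- sees 200
    -- Session B:
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT sum(balance) FROM accts;                          -- also sees 200
    -- Session A:
    UPDATE accts SET balance = balance - 200 WHERE id = 1;
    COMMIT;
    -- Session B:
    UPDATE accts SET balance = balance - 200 WHERE id = 2;
    COMMIT;
    -- Session B fails with SQLSTATE 40001 ("could not serialize access due to
    -- read/write dependencies among transactions"), at the UPDATE or at COMMIT.
    -- Under Repeatable Read both transactions commit and the combined balance
    -- goes negative, an anomaly no serial ordering could produce.
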
Tom Lane 8d4f2ecd41 Change the default value of max_prepared_transactions to zero, and add
documentation warnings against setting it nonzero unless active use of
prepared transactions is intended and a suitable transaction manager has been
installed.  This should help to prevent the type of scenario we've seen
several times now where a prepared transaction is forgotten and eventually
causes severe maintenance problems (or even anti-wraparound shutdown).

The only real reason we had the default be nonzero in the first place was to
support regression testing of the feature.  To still be able to do that,
tweak pg_regress to force a nonzero value during "make check".  Since we
cannot force a nonzero value in "make installcheck", add a variant regression
test "expected" file that shows the results that will be obtained when
max_prepared_transactions is zero.

Also, extend the HINT messages for transaction wraparound warnings to mention
the possibility that old prepared transactions are causing the problem.

All per today's discussion.
2009-04-23 00:23:46 +00:00
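
For illustration, the behavior seen under the new default, and the setting
needed to re-enable the feature (changing this GUC requires a server restart):

    -- With max_prepared_transactions = 0 (the new default):
    BEGIN;
    PREPARE TRANSACTION 'demo_gid';
    -- ERROR:  prepared transactions are disabled
    -- HINT:  Set max_prepared_transactions to a nonzero value.

    -- To use the feature, set it in postgresql.conf and restart, e.g.:
    --   max_prepared_transactions = 10
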