Implement genuine serializable isolation level.
Until now, our Serializable mode has in fact been what's called Snapshot
Isolation, which allows some anomalies that could not occur in any
serialized ordering of the transactions. This patch fixes that using a
method called Serializable Snapshot Isolation, based on research papers by
Michael J. Cahill (see README-SSI for full references). In Serializable
Snapshot Isolation, transactions run like they do in Snapshot Isolation,
but a predicate lock manager observes the reads and writes performed and
aborts transactions if it detects that an anomaly might occur. This method
produces some false positives, i.e. it sometimes aborts transactions even
though there is no anomaly.
To track reads we implement predicate locking; see storage/lmgr/predicate.c.
Whenever a tuple is read, a predicate lock is acquired on the tuple. Shared
memory is finite, so when a transaction takes many tuple-level locks on a
page, the locks are promoted to a single page-level lock, and further to a
single relation-level lock if necessary. To lock key values with no matching
tuple, a sequential scan always takes a relation-level lock, and an index
scan acquires a page-level lock that covers the search key, whether or not
there are any matching keys at the moment.
A predicate lock doesn't conflict with any regular locks or with other
predicate locks in the normal sense. They're only used by the predicate lock
manager to detect the danger of anomalies. Only serializable transactions
participate in predicate locking, so there should be no extra overhead for
other transactions.
Predicate locks can't be released at commit, but must be remembered until
all the transactions that overlapped with them have completed. That means
we need to remember an unbounded number of predicate locks, so we apply a
lossy but conservative method of tracking locks for committed transactions.
If we run short of shared memory, we overflow to a new "pg_serial" SLRU
pool.
We don't currently allow Serializable transactions in Hot Standby mode.
That would be hard, because even read-only transactions can cause anomalies
that wouldn't otherwise occur.
Serializable isolation mode now means the new fully serializable level.
Repeatable Read gives you the old Snapshot Isolation level that we have
always had.
Kevin Grittner and Dan Ports, reviewed by Jeff Davis, Heikki Linnakangas and
Anssi Kääriäinen
2011-02-07 22:46:51 +01:00

/*
 * src/test/isolation/isolationtester.c
 *
 * isolationtester.c
 *
 * Runs an isolation test specified by a spec file.
 */

#include "postgres_fe.h"

#include <sys/time.h>
#ifdef HAVE_SYS_SELECT_H
#include <sys/select.h>
#endif

#include "datatype/timestamp.h"
#include "isolationtester.h"
#include "libpq-fe.h"
#include "pg_getopt.h"
#include "pqexpbuffer.h"
#define PREP_WAITING "isolationtester_waiting"

/*
 * conns[0] is the global setup, teardown, and watchdog connection. Additional
 * connections represent spec-defined sessions.
 */
typedef struct IsoConnInfo
{
	/* The libpq connection object for this connection. */
	PGconn	   *conn;

	/* The backend PID, in numeric and string formats. */
	int			backend_pid;
	const char *backend_pid_str;

	/* Name of the associated session. */
	const char *sessionname;

	/* Active step on this connection, or NULL if idle. */
	PermutationStep *active_step;

	/* Number of NOTICE messages received from connection. */
	int			total_notices;
} IsoConnInfo;

static IsoConnInfo *conns = NULL;
static int nconns = 0;

/* Flag indicating some new NOTICE has arrived */
static bool any_new_notice = false;

/* Maximum time to wait before giving up on a step (in usec) */
static int64 max_step_wait = 300 * USECS_PER_SEC;

static void check_testspec(TestSpec *testspec);
static void run_testspec(TestSpec *testspec);
static void run_all_permutations(TestSpec *testspec);
static void run_all_permutations_recurse(TestSpec *testspec, int *piles,
										 int nsteps, PermutationStep **steps);
static void run_named_permutations(TestSpec *testspec);
static void run_permutation(TestSpec *testspec, int nsteps,
							PermutationStep **steps);

/* Flag bits for try_complete_step(s) */
#define STEP_NONBLOCK	0x1		/* return as soon as cmd waits for a lock */
#define STEP_RETRY		0x2		/* this is a retry of a previously-waiting cmd */

static int	try_complete_steps(TestSpec *testspec, PermutationStep **waiting,
							   int nwaiting, int flags);
static bool try_complete_step(TestSpec *testspec, PermutationStep *pstep,
							  int flags);
static int step_qsort_cmp(const void *a, const void *b);
static int step_bsearch_cmp(const void *a, const void *b);
static bool step_has_blocker(PermutationStep *pstep);
static void printResultSet(PGresult *res);
static void isotesterNoticeProcessor(void *arg, const char *message);
static void blackholeNoticeProcessor(void *arg, const char *message);
static void
disconnect_atexit(void)
{
	int			i;

	for (i = 0; i < nconns; i++)
		if (conns[i].conn)
			PQfinish(conns[i].conn);
Implement genuine serializable isolation level.
Until now, our Serializable mode has in fact been what's called Snapshot
Isolation, which allows some anomalies that could not occur in any
serialized ordering of the transactions. This patch fixes that using a
method called Serializable Snapshot Isolation, based on research papers by
Michael J. Cahill (see README-SSI for full references). In Serializable
Snapshot Isolation, transactions run like they do in Snapshot Isolation,
but a predicate lock manager observes the reads and writes performed and
aborts transactions if it detects that an anomaly might occur. This method
produces some false positives, ie. it sometimes aborts transactions even
though there is no anomaly.
To track reads we implement predicate locking, see storage/lmgr/predicate.c.
Whenever a tuple is read, a predicate lock is acquired on the tuple. Shared
memory is finite, so when a transaction takes many tuple-level locks on a
page, the locks are promoted to a single page-level lock, and further to a
single relation level lock if necessary. To lock key values with no matching
tuple, a sequential scan always takes a relation-level lock, and an index
scan acquires a page-level lock that covers the search key, whether or not
there are any matching keys at the moment.
A predicate lock doesn't conflict with any regular locks or with another
predicate locks in the normal sense. They're only used by the predicate lock
manager to detect the danger of anomalies. Only serializable transactions
participate in predicate locking, so there should be no extra overhead for
for other transactions.
Predicate locks can't be released at commit, but must be remembered until
all the transactions that overlapped with it have completed. That means that
we need to remember an unbounded amount of predicate locks, so we apply a
lossy but conservative method of tracking locks for committed transactions.
If we run short of shared memory, we overflow to a new "pg_serial" SLRU
pool.
We don't currently allow Serializable transactions in Hot Standby mode.
That would be hard, because even read-only transactions can cause anomalies
that wouldn't otherwise occur.
Serializable isolation mode now means the new fully serializable level.
Repeatable Read gives you the old Snapshot Isolation level that we have
always had.
Kevin Grittner and Dan Ports, reviewed by Jeff Davis, Heikki Linnakangas and
Anssi Kääriäinen
2011-02-07 22:46:51 +01:00
|
|
|
}

int
main(int argc, char **argv)
{
    const char *conninfo;
    const char *env_wait;
    TestSpec   *testspec;
    PGresult   *res;
    PQExpBufferData wait_query;
    int         opt;
    int         i;

    while ((opt = getopt(argc, argv, "V")) != -1)
    {
        switch (opt)
        {
            case 'V':
                puts("isolationtester (PostgreSQL) " PG_VERSION);
                exit(0);
            default:
                fprintf(stderr, "Usage: isolationtester [CONNINFO]\n");
                return EXIT_FAILURE;
        }
    }

    /*
     * Make stdout unbuffered to match stderr; and ensure stderr is unbuffered
     * too, which it should already be everywhere except sometimes in Windows.
     */
    setbuf(stdout, NULL);
    setbuf(stderr, NULL);

    /*
     * If the user supplies a non-option parameter on the command line, use it
     * as the conninfo string; otherwise default to setting dbname=postgres
     * and using environment variables or defaults for all other connection
     * parameters.
     */
    if (argc > optind)
        conninfo = argv[optind];
    else
        conninfo = "dbname = postgres";

    /*
     * If PGISOLATIONTIMEOUT is set in the environment, adopt its value (given
     * in seconds) as the max time to wait for any one step to complete.
     */
    env_wait = getenv("PGISOLATIONTIMEOUT");
    if (env_wait != NULL)
        max_step_wait = ((int64) atoi(env_wait)) * USECS_PER_SEC;

    /* Read the test spec from stdin */
    spec_yyparse();
    testspec = &parseresult;

    /* Perform post-parse checking, and fill in linking fields */
    check_testspec(testspec);

    printf("Parsed test spec with %d sessions\n", testspec->nsessions);

    /*
     * Establish connections to the database, one for each session and an
     * extra for lock wait detection and global work.
     */
    nconns = 1 + testspec->nsessions;
    conns = (IsoConnInfo *) pg_malloc0(nconns * sizeof(IsoConnInfo));
    atexit(disconnect_atexit);

    for (i = 0; i < nconns; i++)
|
Implement genuine serializable isolation level.
Until now, our Serializable mode has in fact been what's called Snapshot
Isolation, which allows some anomalies that could not occur in any
serialized ordering of the transactions. This patch fixes that using a
method called Serializable Snapshot Isolation, based on research papers by
Michael J. Cahill (see README-SSI for full references). In Serializable
Snapshot Isolation, transactions run like they do in Snapshot Isolation,
but a predicate lock manager observes the reads and writes performed and
aborts transactions if it detects that an anomaly might occur. This method
produces some false positives, ie. it sometimes aborts transactions even
though there is no anomaly.
To track reads we implement predicate locking, see storage/lmgr/predicate.c.
Whenever a tuple is read, a predicate lock is acquired on the tuple. Shared
memory is finite, so when a transaction takes many tuple-level locks on a
page, the locks are promoted to a single page-level lock, and further to a
single relation level lock if necessary. To lock key values with no matching
tuple, a sequential scan always takes a relation-level lock, and an index
scan acquires a page-level lock that covers the search key, whether or not
there are any matching keys at the moment.
A predicate lock doesn't conflict with any regular locks or with another
predicate locks in the normal sense. They're only used by the predicate lock
manager to detect the danger of anomalies. Only serializable transactions
participate in predicate locking, so there should be no extra overhead for
for other transactions.
Predicate locks can't be released at commit, but must be remembered until
all the transactions that overlapped with it have completed. That means that
we need to remember an unbounded amount of predicate locks, so we apply a
lossy but conservative method of tracking locks for committed transactions.
If we run short of shared memory, we overflow to a new "pg_serial" SLRU
pool.
We don't currently allow Serializable transactions in Hot Standby mode.
That would be hard, because even read-only transactions can cause anomalies
that wouldn't otherwise occur.
Serializable isolation mode now means the new fully serializable level.
Repeatable Read gives you the old Snapshot Isolation level that we have
always had.
Kevin Grittner and Dan Ports, reviewed by Jeff Davis, Heikki Linnakangas and
Anssi Kääriäinen
2011-02-07 22:46:51 +01:00
|
|
|
{
|
2021-06-23 03:43:12 +02:00
|
|
|
if (i == 0)
|
|
|
|
conns[i].sessionname = "control connection";
|
|
|
|
else
|
|
|
|
conns[i].sessionname = testspec->sessions[i - 1]->name;
|
|
|
|
|
|
|
|
conns[i].conn = PQconnectdb(conninfo);
|
|
|
|
if (PQstatus(conns[i].conn) != CONNECTION_OK)
        {
            fprintf(stderr, "Connection %d failed: %s",
                    i, PQerrorMessage(conns[i].conn));
            exit(1);
        }

        /*
         * Set up notice processors for the user-defined connections, so that
         * messages can get printed prefixed with the session names.  The
         * control connection gets a "blackhole" processor instead (hides all
         * messages).
         */
        if (i != 0)
            PQsetNoticeProcessor(conns[i].conn,
                                 isotesterNoticeProcessor,
                                 (void *) &conns[i]);
        else
            PQsetNoticeProcessor(conns[i].conn,
                                 blackholeNoticeProcessor,
                                 NULL);

        /* Save each connection's backend PID for subsequent use. */
        conns[i].backend_pid = PQbackendPID(conns[i].conn);
        conns[i].backend_pid_str = psprintf("%d", conns[i].backend_pid);
|
    }

    /*
     * Build the query we'll use to detect lock contention among sessions in
     * the test specification.  Most of the time, we could get away with
     * simply checking whether a session is waiting for *any* lock: we don't
     * exactly expect concurrent use of test tables.  However, autovacuum will
     * occasionally take AccessExclusiveLock to truncate a table, and we must
     * ignore that transient wait.
     */
    initPQExpBuffer(&wait_query);
    appendPQExpBufferStr(&wait_query,
                         "SELECT pg_catalog.pg_isolation_test_session_is_blocked($1, '{");
    /* The spec syntax requires at least one session; assume that here. */
    appendPQExpBufferStr(&wait_query, conns[1].backend_pid_str);
    for (i = 2; i < nconns; i++)
        appendPQExpBuffer(&wait_query, ",%s", conns[i].backend_pid_str);
    appendPQExpBufferStr(&wait_query, "}')");

    res = PQprepare(conns[0].conn, PREP_WAITING, wait_query.data, 0, NULL);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
    {
        fprintf(stderr, "prepare of lock wait query failed: %s",
                PQerrorMessage(conns[0].conn));
        exit(1);
    }
    PQclear(res);
    termPQExpBuffer(&wait_query);
    /*
     * Run the permutations specified in the spec, or all if none were
     * explicitly specified.
     */
    run_testspec(testspec);
    return 0;
}

/*
 * Validity-check the test spec and fill in cross-links between nodes.
 */
static void
check_testspec(TestSpec *testspec)
{
    int         nallsteps;
    Step      **allsteps;
    int         i,
                j,
                k;

    /* Create a sorted lookup table of all steps. */
    nallsteps = 0;
    for (i = 0; i < testspec->nsessions; i++)
        nallsteps += testspec->sessions[i]->nsteps;

    allsteps = pg_malloc(nallsteps * sizeof(Step *));

    k = 0;
    for (i = 0; i < testspec->nsessions; i++)
    {
        for (j = 0; j < testspec->sessions[i]->nsteps; j++)
            allsteps[k++] = testspec->sessions[i]->steps[j];
    }

    qsort(allsteps, nallsteps, sizeof(Step *), step_qsort_cmp);

    /* Verify that all step names are unique. */
    for (i = 1; i < nallsteps; i++)
    {
        if (strcmp(allsteps[i - 1]->name, allsteps[i]->name) == 0)
        {
            fprintf(stderr, "duplicate step name: %s\n",
                    allsteps[i]->name);
            exit(1);
        }
    }

    /* Set the session index fields in steps. */
    for (i = 0; i < testspec->nsessions; i++)
    {
        Session    *session = testspec->sessions[i];

        for (j = 0; j < session->nsteps; j++)
            session->steps[j]->session = i;
    }

    /*
     * If we have manually-specified permutations, link PermutationSteps to
     * Steps, and fill in blocker links.
     */
    for (i = 0; i < testspec->npermutations; i++)
    {
        Permutation *p = testspec->permutations[i];

        for (j = 0; j < p->nsteps; j++)
        {
            PermutationStep *pstep = p->steps[j];
            Step      **this = (Step **) bsearch(pstep->name,
                                                 allsteps,
                                                 nallsteps,
                                                 sizeof(Step *),
                                                 step_bsearch_cmp);

            if (this == NULL)
            {
                fprintf(stderr, "undefined step \"%s\" specified in permutation\n",
                        pstep->name);
                exit(1);
            }
            pstep->step = *this;

            /* Mark the step used, for check below */
            pstep->step->used = true;
        }

        /*
         * Identify any blocker steps.  We search only the current
         * permutation, since steps not used there couldn't be concurrent.
         * Note that it's OK to reference later permutation steps, so this
         * can't be combined with the previous loop.
         */
        for (j = 0; j < p->nsteps; j++)
        {
            PermutationStep *pstep = p->steps[j];

            for (k = 0; k < pstep->nblockers; k++)
            {
                PermutationStepBlocker *blocker = pstep->blockers[k];
                int         n;

                if (blocker->blocktype == PSB_ONCE)
                    continue;   /* nothing to link to */

                blocker->step = NULL;
                for (n = 0; n < p->nsteps; n++)
                {
                    PermutationStep *otherp = p->steps[n];

                    if (strcmp(otherp->name, blocker->stepname) == 0)
                    {
                        blocker->step = otherp->step;
                        break;
                    }
                }
                if (blocker->step == NULL)
                {
                    fprintf(stderr, "undefined blocking step \"%s\" referenced in permutation step \"%s\"\n",
                            blocker->stepname, pstep->name);
                    exit(1);
                }
                /* can't block on completion of step of own session */
                if (blocker->step->session == pstep->step->session)
                {
                    fprintf(stderr, "permutation step \"%s\" cannot block on its own session\n",
                            pstep->name);
                    exit(1);
                }
            }
        }
    }

    /*
     * If we have manually-specified permutations, verify that all steps have
     * been used, warning about anything defined but not used.  We can skip
     * this when using automatically-generated permutations.
     */
    if (testspec->permutations)
    {
        for (i = 0; i < nallsteps; i++)
        {
            if (!allsteps[i]->used)
                fprintf(stderr, "unused step name: %s\n", allsteps[i]->name);
        }
    }

    free(allsteps);
}
/*
 * Run the permutations specified in the spec, or all if none were
 * explicitly specified.
 */
static void
run_testspec(TestSpec *testspec)
{
    if (testspec->permutations)
        run_named_permutations(testspec);
    else
        run_all_permutations(testspec);
}
/*
 * Run all permutations of the steps and sessions.
 */
static void
run_all_permutations(TestSpec *testspec)
{
    int         nsteps;
    int         i;
    PermutationStep *steps;
    PermutationStep **stepptrs;
    int        *piles;
    /* Count the total number of steps in all sessions */
    nsteps = 0;
    for (i = 0; i < testspec->nsessions; i++)
        nsteps += testspec->sessions[i]->nsteps;

    /* Create PermutationStep workspace array */
    steps = (PermutationStep *) pg_malloc0(sizeof(PermutationStep) * nsteps);
    stepptrs = (PermutationStep **) pg_malloc(sizeof(PermutationStep *) * nsteps);
    for (i = 0; i < nsteps; i++)
        stepptrs[i] = steps + i;
    /*
     * To generate the permutations, we conceptually put the steps of each
     * session on a pile.  To generate a permutation, we pick steps from the
     * piles until all piles are empty.  By picking steps from piles in
     * different order, we get different permutations.
     *
     * A pile is actually just an integer which tells how many steps we've
     * already picked from this pile.
     */
    piles = pg_malloc(sizeof(int) * testspec->nsessions);
    for (i = 0; i < testspec->nsessions; i++)
        piles[i] = 0;

    run_all_permutations_recurse(testspec, piles, 0, stepptrs);

    free(steps);
    free(stepptrs);
    free(piles);
}

static void
run_all_permutations_recurse(TestSpec *testspec, int *piles,
							 int nsteps, PermutationStep **steps)
{
	int			i;
	bool		found = false;
	for (i = 0; i < testspec->nsessions; i++)
	{
		/* If there's any more steps in this pile, pick it and recurse */
		if (piles[i] < testspec->sessions[i]->nsteps)
		{
			Step	   *newstep = testspec->sessions[i]->steps[piles[i]];

			/*
			 * These automatically-generated PermutationSteps never have
			 * blocker conditions.  So we need only fill these fields, relying
			 * on run_all_permutations() to have zeroed the rest:
			 */
			steps[nsteps]->name = newstep->name;
			steps[nsteps]->step = newstep;
			piles[i]++;

			run_all_permutations_recurse(testspec, piles, nsteps + 1, steps);
			piles[i]--;

			found = true;
		}
	}

	/* If all the piles were empty, this permutation is completed. Run it */
	if (!found)
		run_permutation(testspec, nsteps, steps);
}

/*
 * Run permutations given in the test spec
 */
static void
run_named_permutations(TestSpec *testspec)
{
	int			i;
	for (i = 0; i < testspec->npermutations; i++)
	{
		Permutation *p = testspec->permutations[i];

		run_permutation(testspec, p->nsteps, p->steps);
	}
}

static int
step_qsort_cmp(const void *a, const void *b)
{
	Step	   *stepa = *((Step **) a);
	Step	   *stepb = *((Step **) b);

	return strcmp(stepa->name, stepb->name);
}

static int
step_bsearch_cmp(const void *a, const void *b)
{
	char	   *stepname = (char *) a;
	Step	   *step = *((Step **) b);

	return strcmp(stepname, step->name);
}

/*
 * Run one permutation
 */
static void
run_permutation(TestSpec *testspec, int nsteps, PermutationStep **steps)
{
	PGresult   *res;
	int			i;
	int			nwaiting = 0;
	PermutationStep **waiting;

	waiting = pg_malloc(sizeof(PermutationStep *) * testspec->nsessions);
	printf("\nstarting permutation:");
	for (i = 0; i < nsteps; i++)
		printf(" %s", steps[i]->name);
	printf("\n");

	/* Perform setup */
	for (i = 0; i < testspec->nsetupsqls; i++)
	{
		res = PQexec(conns[0].conn, testspec->setupsqls[i]);
		if (PQresultStatus(res) == PGRES_TUPLES_OK)
		{
			printResultSet(res);
		}
		else if (PQresultStatus(res) != PGRES_COMMAND_OK)
		{
			fprintf(stderr, "setup failed: %s", PQerrorMessage(conns[0].conn));
			exit(1);
always had.
Kevin Grittner and Dan Ports, reviewed by Jeff Davis, Heikki Linnakangas and
Anssi Kääriäinen
2011-02-07 22:46:51 +01:00
|
|
|
}
|
|
|
|
PQclear(res);
|
|
|
|
}
	/* Perform per-session setup */
	for (i = 0; i < testspec->nsessions; i++)
	{
		if (testspec->sessions[i]->setupsql)
		{
			res = PQexec(conns[i + 1].conn, testspec->sessions[i]->setupsql);
			if (PQresultStatus(res) == PGRES_TUPLES_OK)
			{
				printResultSet(res);
			}
			else if (PQresultStatus(res) != PGRES_COMMAND_OK)
			{
				fprintf(stderr, "setup of session %s failed: %s",
						conns[i + 1].sessionname,
						PQerrorMessage(conns[i + 1].conn));
				exit(1);
			}
			PQclear(res);
		}
	}
	/* Perform steps */
	for (i = 0; i < nsteps; i++)
	{
		PermutationStep *pstep = steps[i];
		Step	   *step = pstep->step;
		IsoConnInfo *iconn = &conns[1 + step->session];
		PGconn	   *conn = iconn->conn;
		bool		mustwait;
		int			j;

		/*
		 * Check whether the session that needs to perform the next step is
		 * still blocked on an earlier step.  If so, wait for it to finish.
		 */
		if (iconn->active_step != NULL)
		{
			struct timeval start_time;

			gettimeofday(&start_time, NULL);

			while (iconn->active_step != NULL)
			{
				PermutationStep *oldstep = iconn->active_step;

				/*
				 * Wait for oldstep.  But even though we don't use
				 * STEP_NONBLOCK, it might not complete because of blocker
				 * conditions.
				 */
				if (!try_complete_step(testspec, oldstep, STEP_RETRY))
				{
					/* Done, so remove oldstep from the waiting[] array. */
					int			w;

					for (w = 0; w < nwaiting; w++)
					{
						if (oldstep == waiting[w])
							break;
					}
					if (w >= nwaiting)
						abort();	/* can't happen */
					if (w + 1 < nwaiting)
						memmove(&waiting[w], &waiting[w + 1],
								(nwaiting - (w + 1)) * sizeof(PermutationStep *));
					nwaiting--;
				}

				/*
				 * Check for other steps that have finished.  We should do
				 * this if oldstep completed, as it might have unblocked
				 * something.  On the other hand, if oldstep hasn't completed,
				 * we must poll all the active steps in hopes of unblocking
				 * oldstep.  So either way, poll them.
				 */
				nwaiting = try_complete_steps(testspec, waiting, nwaiting,
											  STEP_NONBLOCK | STEP_RETRY);

				/*
				 * If the target session is still busy, apply a timeout to
				 * keep from hanging indefinitely, which could happen with
				 * incorrect blocker annotations.  Use the same 2 *
				 * max_step_wait limit as try_complete_step does for deciding
				 * to die.  (We don't bother with trying to cancel anything,
				 * since it's unclear what to cancel in this case.)
				 */
				if (iconn->active_step != NULL)
				{
					struct timeval current_time;
					int64		td;

					gettimeofday(&current_time, NULL);
					td = (int64) current_time.tv_sec - (int64) start_time.tv_sec;
					td *= USECS_PER_SEC;
					td += (int64) current_time.tv_usec - (int64) start_time.tv_usec;
					if (td > 2 * max_step_wait)
					{
						fprintf(stderr, "step %s timed out after %d seconds\n",
								iconn->active_step->name,
								(int) (td / USECS_PER_SEC));
						fprintf(stderr, "active steps are:");
						for (j = 1; j < nconns; j++)
						{
							IsoConnInfo *oconn = &conns[j];

							if (oconn->active_step != NULL)
								fprintf(stderr, " %s",
										oconn->active_step->name);
						}
						fprintf(stderr, "\n");
						exit(1);
					}
				}
			}
		}
		/* Send the query for this step. */
		if (!PQsendQuery(conn, step->sql))
		{
			fprintf(stdout, "failed to send query for step %s: %s\n",
					step->name, PQerrorMessage(conn));
			exit(1);
		}
		/* Remember we launched a step. */
		iconn->active_step = pstep;
		/* Remember target number of NOTICEs for any blocker conditions. */
		for (j = 0; j < pstep->nblockers; j++)
		{
			PermutationStepBlocker *blocker = pstep->blockers[j];

			if (blocker->blocktype == PSB_NUM_NOTICES)
				blocker->target_notices = blocker->num_notices +
					conns[blocker->step->session + 1].total_notices;
		}
		/* Try to complete this step without blocking. */
		mustwait = try_complete_step(testspec, pstep, STEP_NONBLOCK);

		/* Check for completion of any steps that were previously waiting. */
		nwaiting = try_complete_steps(testspec, waiting, nwaiting,
									  STEP_NONBLOCK | STEP_RETRY);

		/* If this step is waiting, add it to the array of waiters. */
		if (mustwait)
			waiting[nwaiting++] = pstep;
	}
	/* Wait for any remaining queries. */
	nwaiting = try_complete_steps(testspec, waiting, nwaiting, STEP_RETRY);
	if (nwaiting != 0)
	{
		fprintf(stderr, "failed to complete permutation due to mutually-blocking steps\n");
		exit(1);
	}
|
2011-07-12 22:35:09 +02:00
|
|
|
|
Implement genuine serializable isolation level.
Until now, our Serializable mode has in fact been what's called Snapshot
Isolation, which allows some anomalies that could not occur in any
serialized ordering of the transactions. This patch fixes that using a
method called Serializable Snapshot Isolation, based on research papers by
Michael J. Cahill (see README-SSI for full references). In Serializable
Snapshot Isolation, transactions run like they do in Snapshot Isolation,
but a predicate lock manager observes the reads and writes performed and
aborts transactions if it detects that an anomaly might occur. This method
produces some false positives, ie. it sometimes aborts transactions even
though there is no anomaly.
To track reads we implement predicate locking, see storage/lmgr/predicate.c.
Whenever a tuple is read, a predicate lock is acquired on the tuple. Shared
memory is finite, so when a transaction takes many tuple-level locks on a
page, the locks are promoted to a single page-level lock, and further to a
single relation level lock if necessary. To lock key values with no matching
tuple, a sequential scan always takes a relation-level lock, and an index
scan acquires a page-level lock that covers the search key, whether or not
there are any matching keys at the moment.
A predicate lock doesn't conflict with any regular locks or with another
predicate locks in the normal sense. They're only used by the predicate lock
manager to detect the danger of anomalies. Only serializable transactions
participate in predicate locking, so there should be no extra overhead for
for other transactions.
Predicate locks can't be released at commit, but must be remembered until
all the transactions that overlapped with it have completed. That means that
we need to remember an unbounded amount of predicate locks, so we apply a
lossy but conservative method of tracking locks for committed transactions.
If we run short of shared memory, we overflow to a new "pg_serial" SLRU
pool.
We don't currently allow Serializable transactions in Hot Standby mode.
That would be hard, because even read-only transactions can cause anomalies
that wouldn't otherwise occur.
Serializable isolation mode now means the new fully serializable level.
Repeatable Read gives you the old Snapshot Isolation level that we have
always had.
Kevin Grittner and Dan Ports, reviewed by Jeff Davis, Heikki Linnakangas and
Anssi Kääriäinen
2011-02-07 22:46:51 +01:00
|
|
|
/* Perform per-session teardown */
|
|
|
|
for (i = 0; i < testspec->nsessions; i++)
|
|
|
|
{
|
|
|
|
if (testspec->sessions[i]->teardownsql)
|
|
|
|
{
|
2021-06-23 03:43:12 +02:00
|
|
|
res = PQexec(conns[i + 1].conn, testspec->sessions[i]->teardownsql);
|
2013-10-04 15:32:48 +02:00
|
|
|
if (PQresultStatus(res) == PGRES_TUPLES_OK)
|
|
|
|
{
|
|
|
|
printResultSet(res);
|
|
|
|
}
|
|
|
|
else if (PQresultStatus(res) != PGRES_COMMAND_OK)
|
Implement genuine serializable isolation level.
Until now, our Serializable mode has in fact been what's called Snapshot
Isolation, which allows some anomalies that could not occur in any
serialized ordering of the transactions. This patch fixes that using a
method called Serializable Snapshot Isolation, based on research papers by
Michael J. Cahill (see README-SSI for full references). In Serializable
Snapshot Isolation, transactions run like they do in Snapshot Isolation,
but a predicate lock manager observes the reads and writes performed and
aborts transactions if it detects that an anomaly might occur. This method
produces some false positives, ie. it sometimes aborts transactions even
though there is no anomaly.
To track reads we implement predicate locking, see storage/lmgr/predicate.c.
Whenever a tuple is read, a predicate lock is acquired on the tuple. Shared
memory is finite, so when a transaction takes many tuple-level locks on a
page, the locks are promoted to a single page-level lock, and further to a
single relation level lock if necessary. To lock key values with no matching
tuple, a sequential scan always takes a relation-level lock, and an index
scan acquires a page-level lock that covers the search key, whether or not
there are any matching keys at the moment.
A predicate lock doesn't conflict with any regular locks or with another
predicate locks in the normal sense. They're only used by the predicate lock
manager to detect the danger of anomalies. Only serializable transactions
participate in predicate locking, so there should be no extra overhead for
for other transactions.
Predicate locks can't be released at commit, but must be remembered until
all the transactions that overlapped with it have completed. That means that
we need to remember an unbounded amount of predicate locks, so we apply a
lossy but conservative method of tracking locks for committed transactions.
If we run short of shared memory, we overflow to a new "pg_serial" SLRU
pool.
We don't currently allow Serializable transactions in Hot Standby mode.
That would be hard, because even read-only transactions can cause anomalies
that wouldn't otherwise occur.
Serializable isolation mode now means the new fully serializable level.
Repeatable Read gives you the old Snapshot Isolation level that we have
always had.
Kevin Grittner and Dan Ports, reviewed by Jeff Davis, Heikki Linnakangas and
Anssi Kääriäinen
2011-02-07 22:46:51 +01:00
|
|
|
{
|
|
|
|
fprintf(stderr, "teardown of session %s failed: %s",
|
2021-06-23 03:43:12 +02:00
|
|
|
conns[i + 1].sessionname,
|
|
|
|
PQerrorMessage(conns[i + 1].conn));
|
                /* don't exit on teardown failure */
            }
            PQclear(res);
        }
    }

    /* Perform teardown */
    if (testspec->teardownsql)
    {
        res = PQexec(conns[0].conn, testspec->teardownsql);
        if (PQresultStatus(res) == PGRES_TUPLES_OK)
        {
            printResultSet(res);
        }
        else if (PQresultStatus(res) != PGRES_COMMAND_OK)
        {
            fprintf(stderr, "teardown failed: %s",
                    PQerrorMessage(conns[0].conn));
            /* don't exit on teardown failure */
        }
        PQclear(res);
    }

    free(waiting);
}

/*
 * Check for completion of any waiting step(s).
 * Remove completed ones from the waiting[] array,
 * and return the new value of nwaiting.
 * See try_complete_step for the meaning of the flags.
 */
static int
try_complete_steps(TestSpec *testspec, PermutationStep **waiting,
                   int nwaiting, int flags)
{
    int         old_nwaiting;
    bool        have_blocker;

    do
    {
        int         w = 0;

        /* Reset latch; we only care about notices received within loop. */
        any_new_notice = false;

        /* Likewise, these variables reset for each retry. */
        old_nwaiting = nwaiting;
        have_blocker = false;

        /* Scan the array, try to complete steps. */
        while (w < nwaiting)
        {
            if (try_complete_step(testspec, waiting[w], flags))
            {
                /* Still blocked, leave it alone. */
                if (waiting[w]->nblockers > 0)
                    have_blocker = true;
                w++;
            }
            else
            {
                /* Done, remove it from array. */
                if (w + 1 < nwaiting)
                    memmove(&waiting[w], &waiting[w + 1],
                            (nwaiting - (w + 1)) * sizeof(PermutationStep *));
                nwaiting--;
            }
        }

        /*
         * If any of the still-waiting steps have blocker conditions attached,
         * it's possible that one of the steps we examined afterwards has
         * released them (either by completing, or by sending a NOTICE). If
         * any step completions or NOTICEs happened, repeat the loop until
         * none occurs. Without this provision, completion timing could vary
         * depending on the order in which the steps appear in the array.
         */
    } while (have_blocker && (nwaiting < old_nwaiting || any_new_notice));

    return nwaiting;
}

/*
 * Our caller already sent the query associated with this step. Wait for it
 * to either complete, or hit a blocking condition.
 *
 * When calling this function on behalf of a given step for a second or later
 * time, pass the STEP_RETRY flag. Do not pass it on the first call.
 *
 * Returns true if the step was *not* completed, false if it was completed.
 * Reasons for non-completion are (a) the STEP_NONBLOCK flag was specified
 * and the query is waiting to acquire a lock, or (b) the step has an
 * unsatisfied blocker condition. When STEP_NONBLOCK is given, we assume
 * that any lock wait will persist until we have executed additional steps.
 */
static bool
try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
{
    Step       *step = pstep->step;
    IsoConnInfo *iconn = &conns[1 + step->session];
    PGconn     *conn = iconn->conn;
    fd_set      read_set;
    struct timeval start_time;
    struct timeval timeout;
    int         sock = PQsocket(conn);
    int         ret;
    PGresult   *res;
    PGnotify   *notify;
    bool        canceled = false;

    /*
     * If the step is annotated with (*), then on the first call, force it to
     * wait. This is useful for ensuring consistent output when the step
     * might or might not complete so fast that we don't observe it waiting.
     */
    if (!(flags & STEP_RETRY))
    {
        int         i;

        for (i = 0; i < pstep->nblockers; i++)
        {
            PermutationStepBlocker *blocker = pstep->blockers[i];

            if (blocker->blocktype == PSB_ONCE)
            {
                printf("step %s: %s <waiting ...>\n",
                       step->name, step->sql);
                return true;
            }
        }
    }

    if (sock < 0)
    {
        fprintf(stderr, "invalid socket: %s", PQerrorMessage(conn));
        exit(1);
    }

    gettimeofday(&start_time, NULL);
    FD_ZERO(&read_set);

    while (PQisBusy(conn))
    {
        FD_SET(sock, &read_set);
        timeout.tv_sec = 0;
        timeout.tv_usec = 10000;    /* Check for lock waits every 10ms. */

        ret = select(sock + 1, &read_set, NULL, NULL, &timeout);
        if (ret < 0)            /* error in select() */
        {
            if (errno == EINTR)
                continue;
            fprintf(stderr, "select failed: %s\n", strerror(errno));
            exit(1);
        }
        else if (ret == 0)      /* select() timeout: check for lock wait */
        {
            struct timeval current_time;
            int64       td;

            /* If it's OK for the step to block, check whether it has. */
            if (flags & STEP_NONBLOCK)
            {
Create a function to reliably identify which sessions block which others.
This patch introduces "pg_blocking_pids(int) returns int[]", which returns
the PIDs of any sessions that are blocking the session with the given PID.
Historically people have obtained such information using a self-join on
the pg_locks view, but it's unreasonably tedious to do it that way with any
modicum of correctness, and the addition of parallel queries has pretty
much broken that approach altogether. (Given some more columns in the view
than there are today, you could imagine handling parallel-query cases with
a 4-way join; but ugh.)
The new function has the following behaviors that are painful or impossible
to get right via pg_locks:
1. Correctly understands which lock modes block which other ones.
2. In soft-block situations (two processes both waiting for conflicting lock
modes), only the one that's in front in the wait queue is reported to
block the other.
3. In parallel-query cases, reports all sessions blocking any member of
the given PID's lock group, and reports a session by naming its leader
process's PID, which will be the pg_backend_pid() value visible to
clients.
The motivation for doing this right now is mostly to fix the isolation
tests. Commit 38f8bdcac4982215beb9f65a19debecaf22fd470 lobotomized
isolationtester's is-it-waiting query by removing its ability to recognize
nonconflicting lock modes, as a crude workaround for the inability to
handle soft-block situations properly. But even without the lock mode
tests, the old query was excessively slow, particularly in
CLOBBER_CACHE_ALWAYS builds; some of our buildfarm animals fail the new
deadlock-hard test because the deadlock timeout elapses before they can
probe the waiting status of all eight sessions. Replacing the pg_locks
self-join with use of pg_blocking_pids() is not only much more correct, but
a lot faster: I measure it at about 9X faster in a typical dev build with
Asserts, and 3X faster in CLOBBER_CACHE_ALWAYS builds. That should provide
enough headroom for the slower CLOBBER_CACHE_ALWAYS animals to pass the
test, without having to lengthen deadlock_timeout yet more and thus slow
down the test for everyone else.
2016-02-22 20:31:43 +01:00
                bool        waiting;

                res = PQexecPrepared(conns[0].conn, PREP_WAITING, 1,
                                     &conns[step->session + 1].backend_pid_str,
                                     NULL, NULL, 0);
                if (PQresultStatus(res) != PGRES_TUPLES_OK ||
                    PQntuples(res) != 1)
                {
                    fprintf(stderr, "lock wait query failed: %s",
                            PQerrorMessage(conns[0].conn));
                    exit(1);
                }
                waiting = ((PQgetvalue(res, 0, 0))[0] == 't');
                PQclear(res);
                if (waiting)    /* waiting to acquire a lock */
                {
                    /*
                     * Since it takes time to perform the lock-check query,
                     * some data --- notably, NOTICE messages --- might have
                     * arrived since we looked. We must call PQconsumeInput
                     * and then PQisBusy to collect and process any such
                     * messages. In the (unlikely) case that PQisBusy then
                     * returns false, we might as well go examine the
                     * available result.
                     */
                    if (!PQconsumeInput(conn))
                    {
                        fprintf(stderr, "PQconsumeInput failed: %s\n",
                                PQerrorMessage(conn));
                        exit(1);
                    }
                    if (!PQisBusy(conn))
                        break;

                    /*
                     * conn is still busy, so conclude that the step really is
                     * waiting.
                     */
                    if (!(flags & STEP_RETRY))
                        printf("step %s: %s <waiting ...>\n",
                               step->name, step->sql);
                    return true;
                }
                /* else, not waiting */
            }

            /* Figure out how long we've been waiting for this step. */
            gettimeofday(&current_time, NULL);
            td = (int64) current_time.tv_sec - (int64) start_time.tv_sec;
            td *= USECS_PER_SEC;
            td += (int64) current_time.tv_usec - (int64) start_time.tv_usec;

            /*
             * After max_step_wait microseconds, try to cancel the query.
             *
             * If the user tries to test an invalid permutation, we don't want
             * to hang forever, especially when this is running in the
             * buildfarm. This will presumably lead to this permutation
             * failing, but remaining permutations and tests should still be
             * OK.
             */
            if (td > max_step_wait && !canceled)
            {
                PGcancel   *cancel = PQgetCancel(conn);

                if (cancel != NULL)
                {
                    char        buf[256];

                    if (PQcancel(cancel, buf, sizeof(buf)))
                    {
                        /*
                         * print to stdout not stderr, as this should appear
                         * in the test case's results
                         */
                        printf("isolationtester: canceling step %s after %d seconds\n",
                               step->name, (int) (td / USECS_PER_SEC));
                        canceled = true;
                    }
                    else
                        fprintf(stderr, "PQcancel failed: %s\n", buf);
                    PQfreeCancel(cancel);
                }
            }

            /*
             * After twice max_step_wait, just give up and die.
             *
             * Since cleanup steps won't be run in this case, this may cause
             * later tests to fail. That stinks, but it's better than waiting
             * forever for the server to respond to the cancel.
             */
            if (td > 2 * max_step_wait)
            {
                fprintf(stderr, "step %s timed out after %d seconds\n",
                        step->name, (int) (td / USECS_PER_SEC));
                exit(1);
            }
        }
        else if (!PQconsumeInput(conn)) /* select(): data available */
        {
            fprintf(stderr, "PQconsumeInput failed: %s\n",
                    PQerrorMessage(conn));
            exit(1);
        }
    }

    /*
     * The step is done, but we won't report it as complete so long as there
     * are blockers.
     */
    if (step_has_blocker(pstep))
    {
        if (!(flags & STEP_RETRY))
            printf("step %s: %s <waiting ...>\n",
                   step->name, step->sql);
        return true;
    }

    /* Otherwise, go ahead and complete it. */
    if (flags & STEP_RETRY)
        printf("step %s: <... completed>\n", step->name);
    else
        printf("step %s: %s\n", step->name, step->sql);

    while ((res = PQgetResult(conn)))
    {
        switch (PQresultStatus(res))
        {
            case PGRES_COMMAND_OK:
            case PGRES_EMPTY_QUERY:
                break;
            case PGRES_TUPLES_OK:
                printResultSet(res);
                break;
            case PGRES_FATAL_ERROR:

                /*
                 * Detail may contain XID values, so we want to just show
                 * primary. Beware however that libpq-generated error results
                 * may not contain subfields, only an old-style message.
                 */
                {
                    const char *sev = PQresultErrorField(res,
                                                         PG_DIAG_SEVERITY);
                    const char *msg = PQresultErrorField(res,
                                                         PG_DIAG_MESSAGE_PRIMARY);

                    if (sev && msg)
                        printf("%s: %s\n", sev, msg);
                    else
                        printf("%s\n", PQresultErrorMessage(res));
                }
                break;
            default:
                printf("unexpected result status: %s\n",
                       PQresStatus(PQresultStatus(res)));
        }
        PQclear(res);
    }

    /* Report any available NOTIFY messages, too */
    PQconsumeInput(conn);
    while ((notify = PQnotifies(conn)) != NULL)
    {
        /* Try to identify which session it came from */
        const char *sendername = NULL;
        char        pidstring[32];
        int         i;

        for (i = 0; i < testspec->nsessions; i++)
        {
            if (notify->be_pid == conns[i + 1].backend_pid)
            {
                sendername = conns[i + 1].sessionname;
                break;
            }
        }
        if (sendername == NULL)
        {
            /* Doesn't seem to be any test session, so show the hard way */
            snprintf(pidstring, sizeof(pidstring), "PID %d", notify->be_pid);
            sendername = pidstring;
        }
        printf("%s: NOTIFY \"%s\" with payload \"%s\" from %s\n",
               testspec->sessions[step->session]->name,
               notify->relname, notify->extra, sendername);
        PQfreemem(notify);
        PQconsumeInput(conn);
    }

    /* Connection is now idle. */
    iconn->active_step = NULL;

    return false;
}

/* Detect whether a step has any unsatisfied blocker conditions */
static bool
step_has_blocker(PermutationStep *pstep)
{
    int         i;

    for (i = 0; i < pstep->nblockers; i++)
    {
        PermutationStepBlocker *blocker = pstep->blockers[i];
        IsoConnInfo *iconn;

        switch (blocker->blocktype)
        {
            case PSB_ONCE:
                /* Ignore; try_complete_step handles this specially */
                break;
            case PSB_OTHER_STEP:
                /* Block if referenced step is active */
                iconn = &conns[1 + blocker->step->session];
                if (iconn->active_step &&
                    iconn->active_step->step == blocker->step)
                    return true;
                break;
            case PSB_NUM_NOTICES:
                /* Block if not enough notices received yet */
                iconn = &conns[1 + blocker->step->session];
                if (iconn->total_notices < blocker->target_notices)
                    return true;
                break;
        }
    }
    return false;
}
Implement genuine serializable isolation level.

Until now, our Serializable mode has in fact been what's called Snapshot
Isolation, which allows some anomalies that could not occur in any
serial ordering of the transactions. This patch fixes that using a
method called Serializable Snapshot Isolation (SSI), based on research
papers by Michael J. Cahill (see README-SSI for full references). In
Serializable Snapshot Isolation, transactions run as they do in Snapshot
Isolation, but a predicate lock manager observes the reads and writes
performed and aborts transactions if it detects that an anomaly might
occur. This method produces some false positives, i.e. it sometimes
aborts transactions even though there is no anomaly.
To track reads we implement predicate locking; see storage/lmgr/predicate.c.
Whenever a tuple is read, a predicate lock is acquired on the tuple. Shared
memory is finite, so when a transaction takes many tuple-level locks on a
page, the locks are promoted to a single page-level lock, and further to a
single relation-level lock if necessary. To lock key values with no matching
tuple, a sequential scan always takes a relation-level lock, and an index
scan acquires a page-level lock that covers the search key, whether or not
there are any matching keys at the moment.
Predicate locks don't conflict with regular locks, or with other predicate
locks, in the normal sense; they're only used by the predicate lock manager
to detect the danger of anomalies. Only serializable transactions
participate in predicate locking, so there should be no extra overhead for
other transactions.
Predicate locks can't be released at commit, but must be remembered until
all the transactions that overlapped with them have completed. That means
we need to remember an unbounded number of predicate locks, so we apply a
lossy but conservative method of tracking locks for committed transactions.
If we run short of shared memory, we overflow to a new "pg_serial" SLRU
pool.
We don't currently allow Serializable transactions in Hot Standby mode.
That would be hard, because even read-only transactions can cause anomalies
that wouldn't otherwise occur.
Serializable isolation mode now means the new fully serializable level.
Repeatable Read gives you the old Snapshot Isolation level that we have
always had.
Kevin Grittner and Dan Ports, reviewed by Jeff Davis, Heikki Linnakangas and
Anssi Kääriäinen
2011-02-07 22:46:51 +01:00
static void
printResultSet(PGresult *res)
{
    PQprintOpt  popt;

    memset(&popt, 0, sizeof(popt));
    popt.header = true;
    popt.align = true;
    popt.fieldSep = "|";
    PQprint(stdout, res, &popt);
}

/* notice processor for regular user sessions */
static void
isotesterNoticeProcessor(void *arg, const char *message)
{
    IsoConnInfo *myconn = (IsoConnInfo *) arg;

    /* Prefix the backend's message with the session name. */
    printf("%s: %s", myconn->sessionname, message);

    /* Record notices, since we may need this to decide to unblock a step. */
    myconn->total_notices++;
    any_new_notice = true;
}

/* notice processor, hides the message */
static void
blackholeNoticeProcessor(void *arg, const char *message)
{
    /* do nothing */
}