Implement genuine serializable isolation level.

Until now, our Serializable mode has in fact been what's called Snapshot
Isolation, which allows some anomalies that could not occur in any
serialized ordering of the transactions. This patch fixes that using a
method called Serializable Snapshot Isolation, based on research papers by
Michael J. Cahill (see README-SSI for full references). In Serializable
Snapshot Isolation, transactions run like they do in Snapshot Isolation,
but a predicate lock manager observes the reads and writes performed and
aborts transactions if it detects that an anomaly might occur. This method
produces some false positives, i.e. it sometimes aborts transactions even
though there is no anomaly.
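
As an illustration (editorial sketch, not part of this patch), here is the
classic write-skew anomaly that Snapshot Isolation permits and that SSI now
detects, assuming a hypothetical table doctors(name, on_call) with two rows,
both on call:

CREATE TABLE doctors (name text PRIMARY KEY, on_call boolean NOT NULL);
INSERT INTO doctors VALUES ('alice', true), ('bob', true);

-- Session 1
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT count(*) FROM doctors WHERE on_call;               -- sees 2
UPDATE doctors SET on_call = false WHERE name = 'alice';

-- Session 2, running concurrently
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT count(*) FROM doctors WHERE on_call;               -- also sees 2
UPDATE doctors SET on_call = false WHERE name = 'bob';
COMMIT;

-- Session 1
COMMIT;  -- under plain Snapshot Isolation both commits succeed, leaving
         -- nobody on call; under SSI one of the two transactions is
         -- instead aborted with a serialization failure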

To track reads, we implement predicate locking; see storage/lmgr/predicate.c.
Whenever a tuple is read, a predicate lock is acquired on the tuple. Shared
memory is finite, so when a transaction takes many tuple-level locks on a
page, the locks are promoted to a single page-level lock, and further to a
single relation level lock if necessary. To lock key values with no matching
tuple, a sequential scan always takes a relation-level lock, and an index
scan acquires a page-level lock that covers the search key, whether or not
there are any matching keys at the moment.
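
A rough way (editorial sketch, not part of this patch) to watch this promotion
from another session is to inspect pg_locks while a serializable transaction
is scanning; tuple-level entries disappear as they are collapsed into page- or
relation-level SIRead locks:

SELECT locktype, relation::regclass, page, tuple
FROM pg_locks
WHERE mode = 'SIReadLock';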

A predicate lock doesn't conflict with any regular locks or with other
predicate locks in the normal sense. They're only used by the predicate lock
manager to detect the danger of anomalies. Only serializable transactions
participate in predicate locking, so there should be no extra overhead for
other transactions.

Predicate locks can't be released at commit, but must be remembered until
all the transactions that overlapped with the lock holder have completed.
That means that we need to remember an unbounded number of predicate locks,
so we apply a lossy but conservative method of tracking locks for committed
transactions. If we run short of shared memory, we overflow to a new
"pg_serial" SLRU pool.

We don't currently allow Serializable transactions in Hot Standby mode.
That would be hard, because even read-only transactions can cause anomalies
that wouldn't otherwise occur.

Serializable isolation mode now means the new fully serializable level.
Repeatable Read gives you the old Snapshot Isolation level that we have
always had.
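
For example, after this change the two levels are requested like this (the
session default can be changed the same way as before):

BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;  -- the old Snapshot Isolation behavior
COMMIT;
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;     -- the new, truly serializable level
COMMIT;
SET default_transaction_isolation = 'repeatable read';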

Kevin Grittner and Dan Ports, reviewed by Jeff Davis, Heikki Linnakangas and
Anssi Kääriäinen
Committed by Heikki Linnakangas, 2011-02-07 23:46:51 +02:00
commit dafaa3efb7
parent c18f51da17
90 changed files with 14995 additions and 271 deletions

View File

@ -490,6 +490,13 @@
<entry>Can an index of this type be clustered on?</entry>
</row>
<row>
<entry><structfield>ampredlocks</structfield></entry>
<entry><type>bool</type></entry>
<entry></entry>
<entry>Does an index of this type manage fine-grained predicate locks?</entry>
</row>
<row>
<entry><structfield>amkeytype</structfield></entry>
<entry><type>oid</type></entry>
@ -6577,7 +6584,7 @@
<entry><type>text</type></entry>
<entry></entry>
<entry>Name of the lock mode held or desired by this process (see <xref
linkend="locking-tables">)</entry>
linkend="locking-tables"> and <xref linkend="xact-serializable">)</entry>
</row>
<row>
<entry><structfield>granted</structfield></entry>

View File

@ -4456,6 +4456,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
<varlistentry id="guc-default-transaction-isolation" xreflabel="default_transaction_isolation">
<indexterm>
<primary>transaction isolation level</primary>
<secondary>setting default</secondary>
</indexterm>
<indexterm>
<primary><varname>default_transaction_isolation</> configuration parameter</primary>
@ -4481,6 +4482,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
<varlistentry id="guc-default-transaction-read-only" xreflabel="default_transaction_read_only">
<indexterm>
<primary>read-only transaction</primary>
<secondary>setting default</secondary>
</indexterm>
<indexterm>
<primary><varname>default_transaction_read_only</> configuration parameter</primary>
@ -4500,6 +4502,41 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
</listitem>
</varlistentry>
<varlistentry id="guc-default-transaction-deferrable" xreflabel="default_transaction_deferrable">
<indexterm>
<primary>deferrable transaction</primary>
<secondary>setting default</secondary>
</indexterm>
<indexterm>
<primary><varname>default_transaction_deferrable</> configuration parameter</primary>
</indexterm>
<term><varname>default_transaction_deferrable</varname> (<type>boolean</type>)</term>
<listitem>
<para>
When running at the <literal>serializable</> isolation level,
a deferrable read-only SQL transaction may be delayed before
it is allowed to proceed. However, once it begins executing
it does not incur any of the overhead required to ensure
serializability; so serialization code will have no reason to
force it to abort because of concurrent updates, making this
option suitable for long-running read-only transactions.
</para>
<para>
This parameter controls the default deferrable status of each
new transaction. It currently has no effect on read-write
transactions or those operating at isolation levels lower
than <literal>serializable</>. The default is <literal>off</>.
</para>
<para>
Consult <xref linkend="sql-set-transaction"> for more information.
</para>
</listitem>
</varlistentry>
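
A short sketch (editorial, not part of the patch) of how these defaults
combine for a long-running report session:

SET default_transaction_isolation = 'serializable';
SET default_transaction_read_only = on;
SET default_transaction_deferrable = on;
BEGIN;    -- picks up all three defaults: a deferrable, read-only,
          -- serializable transaction
COMMIT;
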
<varlistentry id="guc-session-replication-role" xreflabel="session_replication_role">
<term><varname>session_replication_role</varname> (<type>enum</type>)</term>
<indexterm>
@ -5125,6 +5162,39 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir'
</listitem>
</varlistentry>
<varlistentry id="guc-max-predicate-locks-per-transaction" xreflabel="max_predicate_locks_per_transaction">
<term><varname>max_predicate_locks_per_transaction</varname> (<type>integer</type>)</term>
<indexterm>
<primary><varname>max_predicate_locks_per_transaction</> configuration parameter</primary>
</indexterm>
<listitem>
<para>
The shared predicate lock table tracks locks on
<varname>max_predicate_locks_per_transaction</varname> * (<xref
linkend="guc-max-connections"> + <xref
linkend="guc-max-prepared-transactions">) objects (e.g., tables);
hence, no more than this many distinct objects can be locked at
any one time. This parameter controls the average number of object
locks allocated for each transaction; individual transactions
can lock more objects as long as the locks of all transactions
fit in the lock table. This is <emphasis>not</> the number of
rows that can be locked; that value is unlimited. The default,
64, has generally been sufficient in testing, but you might need to
raise this value if you have clients that touch many different
tables in a single serializable transaction. This parameter can
only be set at server start.
</para>
<para>
Increasing this parameter might cause <productname>PostgreSQL</>
to request more <systemitem class="osname">System V</> shared
memory than your operating system's default configuration
allows. See <xref linkend="sysvipc"> for information on how to
adjust those parameters, if necessary.
</para>
</listitem>
</varlistentry>
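
As a rough aid (editorial sketch, not from the patch) for judging whether this
setting needs to be raised, the current number of SIRead locks can be compared
with the configured capacity of the shared predicate lock table:

SELECT
    (SELECT count(*) FROM pg_locks WHERE mode = 'SIReadLock') AS siread_locks_in_use,
    current_setting('max_predicate_locks_per_transaction')::int
      * (current_setting('max_connections')::int
         + current_setting('max_prepared_transactions')::int) AS lock_table_capacity;
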
</variablelist>
</sect1>

View File

@ -1916,6 +1916,15 @@ LOG: database system is ready to accept read only connections
your setting of <varname>max_prepared_transactions</> is 0.
</para>
</listitem>
<listitem>
<para>
The Serializable transaction isolation level is not yet available in hot
standby. (See <xref linkend="xact-serializable"> and
<xref linkend="serializable-consistency"> for details.)
An attempt to set a transaction to the serializable isolation level in
hot standby mode will generate an error.
</para>
</listitem>
</itemizedlist>
</para>

View File

@ -705,6 +705,19 @@ amrestrpos (IndexScanDesc scan);
it is only safe to use such scans with MVCC-compliant snapshots.
</para>
<para>
When the <structfield>ampredlocks</> flag is not set, any scan using that
index access method within a serializable transaction will acquire a
non-blocking predicate lock on the full index. This will generate a
read-write conflict with the insert of any tuple into that index by a
concurrent serializable transaction. If certain patterns of read-write
conflicts are detected among a set of concurrent serializable
transactions, one of those transactions may be cancelled to protect data
integrity. When the flag is set, it indicates that the index access
method implements finer-grained predicate locking, which will tend to
reduce the frequency of such transaction cancellations.
</para>
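
A quick way (editorial example, not part of the patch) to see which access
methods advertise fine-grained predicate locking is to query the new column
directly:

SELECT amname, ampredlocks FROM pg_am ORDER BY amname;
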
</sect1>
<sect1 id="index-unique-checks">

View File

@ -256,7 +256,7 @@ int lo_open(PGconn *conn, Oid lobjId, int mode);
from a descriptor opened with <symbol>INV_WRITE</symbol> returns
data that reflects all writes of other committed transactions as well
as writes of the current transaction. This is similar to the behavior
of <literal>SERIALIZABLE</> versus <literal>READ COMMITTED</> transaction
of <literal>REPEATABLE READ</> versus <literal>READ COMMITTED</> transaction
modes for ordinary SQL <command>SELECT</> commands.
</para>

View File

@ -20,10 +20,22 @@
<sect1 id="mvcc-intro">
<title>Introduction</title>
<indexterm>
<primary>Multiversion Concurrency Control</primary>
</indexterm>
<indexterm>
<primary>MVCC</primary>
</indexterm>
<indexterm>
<primary>Serializable Snapshot Isolation</primary>
</indexterm>
<indexterm>
<primary>SSI</primary>
</indexterm>
<para>
<productname>PostgreSQL</productname> provides a rich set of tools
for developers to manage concurrent access to data. Internally,
@ -37,7 +49,7 @@
could be caused by (other) concurrent transaction updates on the same
data rows, providing <firstterm>transaction isolation</firstterm>
for each database session. <acronym>MVCC</acronym>, by eschewing
explicit locking methodologies of traditional database systems,
the locking methodologies of traditional database systems,
minimizes lock contention in order to allow for reasonable
performance in multiuser environments.
</para>
@ -48,12 +60,17 @@
<acronym>MVCC</acronym> locks acquired for querying (reading) data
do not conflict with locks acquired for writing data, and so
reading never blocks writing and writing never blocks reading.
<productname>PostgreSQL</productname> maintains this guarantee
even when providing the strictest level of transaction
isolation through the use of an innovative <firstterm>Serializable
Snapshot Isolation</firstterm> (<acronym>SSI</acronym>) level.
</para>
<para>
Table- and row-level locking facilities are also available in
<productname>PostgreSQL</productname> for applications that cannot
adapt easily to <acronym>MVCC</acronym> behavior. However, proper
<productname>PostgreSQL</productname> for applications which don't
generally need full transaction isolation and prefer to explicitly
manage particular points of conflict. However, proper
use of <acronym>MVCC</acronym> will generally provide better
performance than locks. In addition, application-defined advisory
locks provide a mechanism for acquiring locks that are not tied
@ -70,9 +87,21 @@
<para>
The <acronym>SQL</acronym> standard defines four levels of
transaction isolation in terms of three phenomena that must be
prevented between concurrent transactions. These undesirable
phenomena are:
transaction isolation. The most strict is Serializable,
which is defined by the standard in a paragraph which says that any
concurrent execution of a set of Serializable transactions is guaranteed
to produce the same effect as running them one at a time in some order.
The other three levels are defined in terms of phenomena, resulting from
interaction between concurrent transactions, which must not occur at
each level. The standard notes that due to the definition of
Serializable, none of these phenomena are possible at that level. (This
is hardly surprising -- if the effect of the transactions must be
consistent with having been run one at a time, how could you see any
phenomena caused by interactions?)
</para>
<para>
The phenomena which are prohibited at various levels are:
<variablelist>
<varlistentry>
@ -211,15 +240,16 @@
<para>
In <productname>PostgreSQL</productname>, you can request any of the
four standard transaction isolation levels. But internally, there are
only two distinct isolation levels, which correspond to the levels Read
Committed and Serializable. When you select the level Read
Uncommitted you really get Read Committed, and when you select
Repeatable Read you really get Serializable, so the actual
only three distinct isolation levels, which correspond to the levels Read
Committed, Repeatable Read, and Serializable. When you select the level Read
Uncommitted you really get Read Committed, and phantom reads are not possible
in the <productname>PostgreSQL</productname> implementation of Repeatable
Read, so the actual
isolation level might be stricter than what you select. This is
permitted by the SQL standard: the four isolation levels only
define which phenomena must not happen, they do not define which
phenomena must happen. The reason that <productname>PostgreSQL</>
only provides two isolation levels is that this is the only
only provides three isolation levels is that this is the only
sensible way to map the standard isolation levels to the multiversion
concurrency control architecture. The behavior of the available
isolation levels is detailed in the following subsections.
@ -238,6 +268,10 @@
<secondary>read committed</secondary>
</indexterm>
<indexterm>
<primary>read committed</primary>
</indexterm>
<para>
<firstterm>Read Committed</firstterm> is the default isolation
level in <productname>PostgreSQL</productname>. When a transaction
@ -345,39 +379,46 @@ COMMIT;
</para>
</sect2>
<sect2 id="xact-serializable">
<title>Serializable Isolation Level</title>
<sect2 id="xact-repeatable-read">
<title>Repeatable Read Isolation Level</title>
<indexterm>
<primary>transaction isolation level</primary>
<secondary>serializable</secondary>
<secondary>repeatable read</secondary>
</indexterm>
<indexterm>
<primary>repeatable read</primary>
</indexterm>
<para>
The <firstterm>Serializable</firstterm> isolation level provides the strictest transaction
isolation. This level emulates serial transaction execution,
as if transactions had been executed one after another, serially,
rather than concurrently. However, applications using this level must
be prepared to retry transactions due to serialization failures.
The <firstterm>Repeatable Read</firstterm> isolation level only sees
data committed before the transaction began; it never sees either
uncommitted data or changes committed during transaction execution
by concurrent transactions. (However, the query does see the
effects of previous updates executed within its own transaction,
even though they are not yet committed.) This is a stronger
guarantee than is required by the <acronym>SQL</acronym> standard
for this isolation level, and prevents all of the phenomena described
in <xref linkend="mvcc-isolevel-table">. As mentioned above, this is
specifically allowed by the standard, which only describes the
<emphasis>minimum</emphasis> protections each isolation level must
provide.
</para>
<para>
When a transaction is using the serializable level,
a <command>SELECT</command> query only sees data committed before the
transaction began; it never sees either uncommitted data or changes
committed
during transaction execution by concurrent transactions. (However,
the query does see the effects of previous updates
executed within its own transaction, even though they are not yet
committed.) This is different from Read Committed in that
a query in a serializable transaction
sees a snapshot as of the start of the <emphasis>transaction</>,
not as of the start
This level is different from Read Committed in that a query in a
repeatable read transaction sees a snapshot as of the start of the
<emphasis>transaction</>, not as of the start
of the current query within the transaction. Thus, successive
<command>SELECT</command> commands within a <emphasis>single</>
transaction see the same data, i.e., they do not see changes made by
other transactions that committed after their own transaction started.
(This behavior can be ideal for reporting applications.)
</para>
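
A brief sketch (editorial, assuming a hypothetical table accounts) of the
snapshot behavior described above:

-- Session 1
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM accounts;   -- say this returns 4

-- Session 2
INSERT INTO accounts DEFAULT VALUES;   -- commits immediately

-- Session 1
SELECT count(*) FROM accounts;   -- still 4: same snapshot as the first query
COMMIT;
SELECT count(*) FROM accounts;   -- 5: a new snapshot after the commit
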
<para>
Applications using this level must be prepared to retry transactions
due to serialization failures.
</para>
<para>
@ -386,22 +427,21 @@ COMMIT;
behave the same as <command>SELECT</command>
in terms of searching for target rows: they will only find target rows
that were committed as of the transaction start time. However, such a
target
row might have already been updated (or deleted or locked) by
target row might have already been updated (or deleted or locked) by
another concurrent transaction by the time it is found. In this case, the
serializable transaction will wait for the first updating transaction to commit or
repeatable read transaction will wait for the first updating transaction to commit or
roll back (if it is still in progress). If the first updater rolls back,
then its effects are negated and the serializable transaction can proceed
then its effects are negated and the repeatable read transaction can proceed
with updating the originally found row. But if the first updater commits
(and actually updated or deleted the row, not just locked it)
then the serializable transaction will be rolled back with the message
then the repeatable read transaction will be rolled back with the message
<screen>
ERROR: could not serialize access due to concurrent update
</screen>
because a serializable transaction cannot modify or lock rows changed by
other transactions after the serializable transaction began.
because a repeatable read transaction cannot modify or lock rows changed by
other transactions after the repeatable read transaction began.
</para>
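
A minimal sketch (editorial, with a hypothetical table accounts(id, balance))
of the failure described above:

-- Session 1
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT balance FROM accounts WHERE id = 1;

-- Session 2 (autocommit)
UPDATE accounts SET balance = balance - 100 WHERE id = 1;

-- Session 1
UPDATE accounts SET balance = balance + 10 WHERE id = 1;
-- ERROR:  could not serialize access due to concurrent update
ROLLBACK;
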
<para>
@ -419,39 +459,70 @@ ERROR: could not serialize access due to concurrent update
</para>
<para>
The Serializable mode provides a rigorous guarantee that each
transaction sees a wholly consistent view of the database. However,
the application has to be prepared to retry transactions when concurrent
updates make it impossible to sustain the illusion of serial execution.
Since the cost of redoing complex transactions can be significant,
serializable mode is recommended only when updating transactions contain logic
sufficiently complex that they might give wrong answers in Read
Committed mode. Most commonly, Serializable mode is necessary when
a transaction executes several successive commands that must see
identical views of the database.
The Repeatable Read mode provides a rigorous guarantee that each
transaction sees a completely stable view of the database. However,
this view will not necessarily always be consistent with some serial
(one at a time) execution of concurrent transactions of the same level.
For example, even a read only transaction at this level may see a
control record updated to show that a batch has been completed but
<emphasis>not</emphasis> see one of the detail records which is logically
part of the batch because it read an earlier revision of the control
record. Attempts to enforce business rules by transactions running at
this isolation level are not likely to work correctly without careful use
of explicit locks to block conflicting transactions.
</para>
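
One possible interleaving (editorial sketch, with hypothetical tables
control(batch_nbr, closed) and receipts(batch_nbr, amount)) that produces the
inconsistency described above:

-- Session B takes its snapshot while batch 1 is still open
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT closed FROM control WHERE batch_nbr = 1;            -- false
INSERT INTO receipts (batch_nbr, amount) VALUES (1, 100.00);
-- (B does not commit yet)

-- Session A closes the batch (autocommit)
UPDATE control SET closed = true WHERE batch_nbr = 1;

-- Session C runs a read-only report
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT closed FROM control WHERE batch_nbr = 1;            -- true
SELECT sum(amount) FROM receipts WHERE batch_nbr = 1;      -- misses B's receipt
COMMIT;

-- Session B
COMMIT;   -- C saw the batch as closed but incomplete
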
<sect3 id="mvcc-serializability">
<title>Serializable Isolation Versus True Serializability</title>
<note>
<para>
Prior to <productname>PostgreSQL</productname> version 9.1, a request
for the Serializable transaction isolation level provided exactly the
same behavior described here. To retain the legacy Serializable
behavior, Repeatable Read should now be requested.
</para>
</note>
</sect2>
<sect2 id="xact-serializable">
<title>Serializable Isolation Level</title>
<indexterm>
<primary>serializability</primary>
<primary>transaction isolation level</primary>
<secondary>serializable</secondary>
</indexterm>
<indexterm>
<primary>serializable</primary>
</indexterm>
<indexterm>
<primary>predicate locking</primary>
</indexterm>
<indexterm>
<primary>serialization anomaly</primary>
</indexterm>
<para>
The intuitive meaning (and mathematical definition) of
<quote>serializable</> execution is that any two successfully committed
concurrent transactions will appear to have executed strictly serially,
one after the other &mdash; although which one appeared to occur first might
not be predictable in advance. It is important to realize that forbidding
the undesirable behaviors listed in <xref linkend="mvcc-isolevel-table">
is not sufficient to guarantee true serializability, and in fact
<productname>PostgreSQL</productname>'s Serializable mode <emphasis>does
not guarantee serializable execution in this sense</>. As an example,
The <firstterm>Serializable</firstterm> isolation level provides the strictest transaction
isolation. This level emulates serial transaction execution,
as if transactions had been executed one after another, serially,
rather than concurrently. However, like the Repeatable Read level,
applications using this level must
be prepared to retry transactions due to serialization failures.
In fact, this isolation level works exactly the same as Repeatable
Read except that it monitors for conditions which could make
execution of a concurrent set of serializable transactions behave
in a manner inconsistent with all possible serial (one at a time)
executions of those transactions. This monitoring does not
introduce any blocking beyond that present in repeatable read, but
there is some overhead to the monitoring, and detection of the
conditions which could cause a
<firstterm>serialization anomaly</firstterm> will trigger a
<firstterm>serialization failure</firstterm>.
</para>
<para>
As an example,
consider a table <structname>mytab</>, initially containing:
<screen>
class | value
@ -472,48 +543,137 @@ SELECT SUM(value) FROM mytab WHERE class = 1;
SELECT SUM(value) FROM mytab WHERE class = 2;
</screen>
and obtains the result 300, which it inserts in a new row with
<structfield>class</><literal> = 1</>. Then both transactions commit. None of
the listed undesirable behaviors have occurred, yet we have a result
that could not have occurred in either order serially. If A had
<structfield>class</><literal> = 1</>. Then both transactions try to commit.
If either transaction were running at the Repeatable Read isolation level,
both would be allowed to commit; but since there is no serial order of execution
consistent with the result, using Serializable transactions will allow one
transaction to commit and will roll the other back with this message:
<screen>
ERROR: could not serialize access due to read/write dependencies among transactions
</screen>
This is because if A had
executed before B, B would have computed the sum 330, not 300, and
similarly the other order would have resulted in a different sum
computed by A.
</para>
<para>
To guarantee true mathematical serializability, it is necessary for
a database system to enforce <firstterm>predicate locking</>, which
means that a transaction cannot insert or modify a row that would
have matched the <literal>WHERE</> condition of a query in another concurrent
transaction. For example, once transaction A has executed the query
<literal>SELECT ... WHERE class = 1</>, a predicate-locking system
would forbid transaction B from inserting any new row with class 1
until A has committed.
<footnote>
<para>
Essentially, a predicate-locking system prevents phantom reads
by restricting what is written, whereas MVCC prevents them by
restricting what is read.
</para>
</footnote>
Such a locking system is complex to
implement and extremely expensive in execution, since every session must
be aware of the details of every query executed by every concurrent
transaction. And this large expense is mostly wasted, since in
practice most applications do not do the sorts of things that could
result in problems. (Certainly the example above is rather contrived
and unlikely to represent real software.) For these reasons,
<productname>PostgreSQL</productname> does not implement predicate
locking.
To guarantee true serializability <productname>PostgreSQL</productname>
uses <firstterm>predicate locking</>, which means that it keeps locks
which allow it to determine when a write would have had an impact on
the result of a previous read from a concurrent transaction, had it run
first. In <productname>PostgreSQL</productname> these locks do not
cause any blocking and therefore can <emphasis>not</> play any part in
causing a deadlock. They are used to identify and flag dependencies
among concurrent serializable transactions which in certain combinations
can lead to serialization anomalies. In contrast, a Read Committed or
Repeatable Read transaction which wants to ensure data consistency may
need to take out a lock on an entire table, which could block other
users attempting to use that table, or it may use <literal>SELECT FOR
UPDATE</literal> or <literal>SELECT FOR SHARE</literal> which not only
can block other transactions but cause disk access.
</para>
<para>
In cases where the possibility of non-serializable execution
is a real hazard, problems can be prevented by appropriate use of
explicit locking. Further discussion appears in the following
sections.
Predicate locks in <productname>PostgreSQL</productname>, like in most
other database systems, are based on data actually accessed by a
transaction. These will show up in the
<link linkend="view-pg-locks"><structname>pg_locks</structname></link>
system view with a <literal>mode</> of <literal>SIReadLock</>. The
particular locks
acquired during execution of a query will depend on the plan used by
the query, and multiple finer-grained locks (e.g., tuple locks) may be
combined into fewer coarser-grained locks (e.g., page locks) during the
course of the transaction to prevent exhaustion of the memory used to
track the locks. A <literal>READ ONLY</> transaction may be able to
release its SIRead locks before completion, if it detects that no
conflicts can still occur which could lead to a serialization anomaly.
In fact, <literal>READ ONLY</> transactions will often be able to
establish that fact at startup and avoid taking any predicate locks.
If you explicitly request a <literal>SERIALIZABLE READ ONLY DEFERRABLE</>
transaction, it will block until it can establish this fact. (This is
the <emphasis>only</> case where Serializable transactions block but
Repeatable Read transactions don't.) On the other hand, SIRead locks
often need to be kept past transaction commit, until overlapping read-write
transactions complete.
</para>
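
For example (editorial sketch), a long report can be run as a deferrable
read-only serializable transaction; it may wait before its first query until a
safe snapshot is available, and from then on runs with no risk of
serialization failure and without taking SIRead locks:

BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE;
SELECT count(*) FROM pg_class;   -- stand-in for long-running read-only work
COMMIT;
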
</sect3>
<para>
Consistent use of Serializable transactions can simplify development.
The guarantee that any set of concurrent serializable transactions will
have the same effect as if they were run one at a time means that if
you can demonstrate that a single transaction, as written, will do the
right thing when run by itself, you can have confidence that it will
do the right thing in any mix of serializable transactions, even without
any information about what those other transactions might do. It is
important that an environment which uses this technique have a
generalized way of handling serialization failures (which always return
with a SQLSTATE value of '40001'), because it will be very hard to
predict exactly which transactions might contribute to the read/write
dependencies and need to be rolled back to prevent serialization
anomalies. The monitoring of read/write dependencies has a cost, as does
the restart of transactions which are terminated with a serialization
failure, but balanced against the cost and blocking involved in use of
explicit locks and <literal>SELECT FOR UPDATE</> or <literal>SELECT FOR
SHARE</>, Serializable transactions are the best performance choice
for some environments.
</para>
<para>
For optimal performance when relying on Serializable transactions for
concurrency control, these issues should be considered:
<itemizedlist>
<listitem>
<para>
Declare transactions as <literal>READ ONLY</> when possible.
</para>
</listitem>
<listitem>
<para>
Control the number of active connections, using a connection pool if
needed. This is always an important performance consideration, but
it can be particularly important in a busy system using Serializable
transactions.
</para>
</listitem>
<listitem>
<para>
Don't put more into a single transaction than needed for integrity
purposes.
</para>
</listitem>
<listitem>
<para>
Don't leave connections dangling <quote>idle in transaction</quote>
longer than necessary.
</para>
</listitem>
<listitem>
<para>
Eliminate explicit locks, <literal>SELECT FOR UPDATE</>, and
<literal>SELECT FOR SHARE</> where no longer needed due to the
protections automatically provided by Serializable transactions.
</para>
</listitem>
</itemizedlist>
</para>
<warning>
<para>
Support for the Serializable transaction isolation level has not yet
been added to Hot Standby replication targets (described in
<xref linkend="hot-standby">). The strictest isolation level currently
supported in hot standby mode is Repeatable Read. While performing all
permanent database writes within Serializable transactions on the
master will ensure that all standbys will eventually reach a consistent
state, a Repeatable Read transaction run on the standby can sometimes
see a transient state which is inconsistent with any serial execution
of serializable transactions on the master.
</para>
</warning>
</sect2>
</sect1>
@ -1109,80 +1269,148 @@ SELECT pg_advisory_lock(q.id) FROM
<title>Data Consistency Checks at the Application Level</title>
<para>
Because readers in <productname>PostgreSQL</productname>
do not lock data, regardless of
transaction isolation level, data read by one transaction can be
overwritten by another concurrent transaction. In other words,
if a row is returned by <command>SELECT</command> it doesn't mean that
the row is still current at the instant it is returned (i.e., sometime
after the current query began). The row might have been modified or
deleted by an already-committed transaction that committed after
the <command>SELECT</command> started.
Even if the row is still valid <quote>now</>, it could be changed or
deleted
before the current transaction does a commit or rollback.
It is very difficult to enforce business rules regarding data integrity
using Read Committed transactions because the view of the data is
shifting with each statement, and even a single statement may not
restrict itself to the statement's snapshot if a write conflict occurs.
</para>
<para>
Another way to think about it is that each
transaction sees a snapshot of the database contents, and concurrently
executing transactions might very well see different snapshots. So the
whole concept of <quote>now</quote> is somewhat ill-defined anyway.
This is not normally
a big problem if the client applications are isolated from each other,
but if the clients can communicate via channels outside the database
then serious confusion might ensue.
While a Repeatable Read transaction has a stable view of the data
throughout its execution, there is a subtle issue with using
<acronym>MVCC</acronym> snapshots for data consistency checks, involving
something known as <firstterm>read/write conflicts</firstterm>.
If one transaction writes data and a concurrent transaction attempts
to read the same data (whether before or after the write), it cannot
see the work of the other transaction. The reader then appears to have
executed first regardless of which started first or which committed
first. If that is as far as it goes, there is no problem, but
if the reader also writes data which is read by a concurrent transaction
there is now a transaction which appears to have run before either of
the previously mentioned transactions. If the transaction which appears
to have executed last actually commits first, it is very easy for a
cycle to appear in a graph of the order of execution of the transactions.
When such a cycle appears, integrity checks will not work correctly
without some help.
</para>
<para>
To ensure the current validity of a row and protect it against
concurrent updates one must use <command>SELECT FOR UPDATE</command>,
<command>SELECT FOR SHARE</command>, or an appropriate <command>LOCK
TABLE</command> statement. (<command>SELECT FOR UPDATE</command>
and <command>SELECT FOR SHARE</command> lock just the
returned rows against concurrent updates, while <command>LOCK
TABLE</command> locks the whole table.) This should be taken into
account when porting applications to
<productname>PostgreSQL</productname> from other environments.
As mentioned in <xref linkend="xact-serializable">, Serializable
transactions are just Repeatable Read transactions which add
non-blocking monitoring for dangerous patterns of read/write conflicts.
When a pattern is detected which could cause a cycle in the apparent
order of execution, one of the transactions involved is rolled back to
break the cycle.
</para>
<para>
Global validity checks require extra thought under <acronym>MVCC</acronym>.
For example, a banking application might wish to check that the sum of
all credits in one table equals the sum of debits in another table,
when both tables are being actively updated. Comparing the results of two
successive <literal>SELECT sum(...)</literal> commands will not work reliably in
Read Committed mode, since the second query will likely include the results
of transactions not counted by the first. Doing the two sums in a
single serializable transaction will give an accurate picture of only the
effects of transactions that committed before the serializable transaction
started &mdash; but one might legitimately wonder whether the answer is still
relevant by the time it is delivered. If the serializable transaction
itself applied some changes before trying to make the consistency check,
the usefulness of the check becomes even more debatable, since now it
includes some but not all post-transaction-start changes. In such cases
a careful person might wish to lock all tables needed for the check,
in order to get an indisputable picture of current reality. A
<literal>SHARE</> mode (or higher) lock guarantees that there are no
uncommitted changes in the locked table, other than those of the current
transaction.
</para>
<sect2 id="serializable-consistency">
<title>Enforcing Consistency With Serializable Transactions</title>
<para>
Note also that if one is relying on explicit locking to prevent concurrent
changes, one should either use Read Committed mode, or in Serializable
mode be careful to obtain
locks before performing queries. A lock obtained by a
serializable transaction guarantees that no other transactions modifying
the table are still running, but if the snapshot seen by the
transaction predates obtaining the lock, it might predate some now-committed
changes in the table. A serializable transaction's snapshot is actually
frozen at the start of its first query or data-modification command
(<literal>SELECT</>, <literal>INSERT</>,
<literal>UPDATE</>, or <literal>DELETE</>), so
it is possible to obtain locks explicitly before the snapshot is
frozen.
</para>
<para>
If the Serializable transaction isolation level is used for all writes
and for all reads which need a consistent view of the data, no other
effort is required to ensure consistency. Software from other
environments which is written to use serializable transactions to
ensure consistency should <quote>just work</quote> in this regard in
<productname>PostgreSQL</productname>.
</para>
<para>
When using this technique, you can avoid placing an unnecessary burden on
application programmers by routing the application software through a
framework which automatically retries transactions that are rolled
back with a serialization failure. It may be a good idea to set
<literal>default_transaction_isolation</> to <literal>serializable</>.
It would also be wise to take some action to ensure that no other
transaction isolation level is used, either inadvertently or to
subvert integrity checks, through checks of the transaction isolation
level in triggers.
</para>
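
One way to implement such a check (editorial sketch; the function and table
names are hypothetical) is a statement-level trigger that rejects writes made
outside a serializable transaction:

CREATE FUNCTION require_serializable() RETURNS trigger AS $$
BEGIN
    IF current_setting('transaction_isolation') <> 'serializable' THEN
        RAISE EXCEPTION '% must be modified in a SERIALIZABLE transaction',
                        TG_TABLE_NAME;
    END IF;
    RETURN NULL;   -- return value is ignored for statement-level triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER enforce_serializable
    BEFORE INSERT OR UPDATE OR DELETE ON receipts
    FOR EACH STATEMENT EXECUTE PROCEDURE require_serializable();
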
<para>
See <xref linkend="xact-serializable"> for performance suggestions.
</para>
<warning>
<para>
This level of integrity protection using Serializable transactions
does not yet extend to hot standby mode (<xref linkend="hot-standby">).
Because of that, those using hot standby may want to use Repeatable
Read and explicit locking on the master.
</para>
</warning>
</sect2>
<sect2 id="non-serializable-consistency">
<title>Enforcing Consistency With Explicit Blocking Locks</title>
<para>
When non-serializable writes are possible,
to ensure the current validity of a row and protect it against
concurrent updates one must use <command>SELECT FOR UPDATE</command>,
<command>SELECT FOR SHARE</command>, or an appropriate <command>LOCK
TABLE</command> statement. (<command>SELECT FOR UPDATE</command>
and <command>SELECT FOR SHARE</command> lock just the
returned rows against concurrent updates, while <command>LOCK
TABLE</command> locks the whole table.) This should be taken into
account when porting applications to
<productname>PostgreSQL</productname> from other environments.
</para>
<para>
Also of note to those converting from other environments is the fact
that <command>SELECT FOR UPDATE</command> does not ensure that a
concurrent transaction will not update or delete a selected row.
To do that in <productname>PostgreSQL</productname> you must actually
update the row, even if no values need to be changed.
<command>SELECT FOR UPDATE</command> <emphasis>temporarily blocks</emphasis>
other transactions from acquiring the same lock or executing an
<command>UPDATE</command> or <command>DELETE</command> which would
affect the locked row, but once the transaction holding this lock
commits or rolls back, a blocked transaction will proceed with the
conflicting operation unless an actual <command>UPDATE</command> of
the row was performed while the lock was held.
</para>
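
A sketch of that workaround (editorial, hypothetical accounts table): perform
a real, even value-preserving, UPDATE rather than relying on SELECT FOR UPDATE
alone:

BEGIN;
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;
UPDATE accounts SET balance = balance WHERE id = 1;
    -- actually updating the row (even without changing any values) is what
    -- keeps a blocked concurrent UPDATE or DELETE from simply proceeding once
    -- this transaction commits, per the paragraph above
COMMIT;
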
<para>
Global validity checks require extra thought under
non-serializable <acronym>MVCC</acronym>.
For example, a banking application might wish to check that the sum of
all credits in one table equals the sum of debits in another table,
when both tables are being actively updated. Comparing the results of two
successive <literal>SELECT sum(...)</literal> commands will not work reliably in
Read Committed mode, since the second query will likely include the results
of transactions not counted by the first. Doing the two sums in a
single repeatable read transaction will give an accurate picture of only the
effects of transactions that committed before the repeatable read transaction
started &mdash; but one might legitimately wonder whether the answer is still
relevant by the time it is delivered. If the repeatable read transaction
itself applied some changes before trying to make the consistency check,
the usefulness of the check becomes even more debatable, since now it
includes some but not all post-transaction-start changes. In such cases
a careful person might wish to lock all tables needed for the check,
in order to get an indisputable picture of current reality. A
<literal>SHARE</> mode (or higher) lock guarantees that there are no
uncommitted changes in the locked table, other than those of the current
transaction.
</para>
<para>
Note also that if one is relying on explicit locking to prevent concurrent
changes, one should either use Read Committed mode, or in Repeatable Read
mode be careful to obtain
locks before performing queries. A lock obtained by a
repeatable read transaction guarantees that no other transactions modifying
the table are still running, but if the snapshot seen by the
transaction predates obtaining the lock, it might predate some now-committed
changes in the table. A repeatable read transaction's snapshot is actually
frozen at the start of its first query or data-modification command
(<literal>SELECT</>, <literal>INSERT</>,
<literal>UPDATE</>, or <literal>DELETE</>), so
it is possible to obtain locks explicitly before the snapshot is
frozen.
</para>
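
A sketch of the explicit-lock approach described in the two paragraphs above,
with hypothetical credits(amount) and debits(amount) tables (in Repeatable
Read, the locks must be taken before the transaction's first query):

BEGIN;   -- Read Committed, or Repeatable Read with the locks taken first
LOCK TABLE credits, debits IN SHARE MODE;   -- waits out uncommitted writers
SELECT sum(amount) FROM credits;
SELECT sum(amount) FROM debits;
COMMIT;
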
</sect2>
</sect1>
<sect1 id="locking-indexes">

View File

@ -27,6 +27,7 @@ BEGIN [ WORK | TRANSACTION ] [ <replaceable class="parameter">transaction_mode</
ISOLATION LEVEL { SERIALIZABLE | REPEATABLE READ | READ COMMITTED | READ UNCOMMITTED }
READ WRITE | READ ONLY
[ NOT ] DEFERRABLE
</synopsis>
</refsynopsisdiv>
@ -57,7 +58,7 @@ BEGIN [ WORK | TRANSACTION ] [ <replaceable class="parameter">transaction_mode</
</para>
<para>
If the isolation level or read/write mode is specified, the new
If the isolation level, read/write mode, or deferrable mode is specified, the new
transaction has those characteristics, as if
<xref linkend="sql-set-transaction">
was executed.
@ -135,6 +136,12 @@ BEGIN;
contains additional compatibility information.
</para>
<para>
The <literal>DEFERRABLE</literal>
<replaceable class="parameter">transaction_mode</replaceable>
is a <productname>PostgreSQL</productname> language extension.
</para>
<para>
Incidentally, the <literal>BEGIN</literal> key word is used for a
different purpose in embedded SQL. You are advised to be careful

View File

@ -67,10 +67,12 @@ LOCK [ TABLE ] [ ONLY ] <replaceable class="PARAMETER">name</replaceable> [, ...
</para>
<para>
To achieve a similar effect when running a transaction at the Serializable
To achieve a similar effect when running a transaction at the
<literal>REPEATABLE READ</> or <literal>SERIALIZABLE</>
isolation level, you have to execute the <command>LOCK TABLE</> statement
before executing any <command>SELECT</> or data modification statement.
A serializable transaction's view of data will be frozen when its first
A <literal>REPEATABLE READ</> or <literal>SERIALIZABLE</> transaction's
view of data will be frozen when its first
<command>SELECT</> or data modification statement begins. A <command>LOCK
TABLE</> later in the transaction will still prevent concurrent writes
&mdash; but it won't ensure that what the transaction reads corresponds to

View File

@ -646,6 +646,41 @@ PostgreSQL documentation
</listitem>
</varlistentry>
<varlistentry>
<term><option>--serializable-deferrable</option></term>
<listitem>
<para>
Use a <literal>serializable</literal> transaction for the dump, to
ensure that the snapshot used is consistent with later database
states; but do this by waiting for a point in the transaction stream
at which no anomalies can be present, so that there isn't a risk of
the dump failing or causing other transactions to roll back with a
<literal>serialization_failure</literal>. See <xref linkend="mvcc">
for more information about transaction isolation and concurrency
control.
</para>
<para>
This option is not beneficial for a dump which is intended only for
disaster recovery. It could be useful for a dump used to load a
copy of the database for reporting or other read-only load sharing
while the original database continues to be updated. Without it the
dump may reflect a state which is not consistent with any serial
execution of the transactions eventually committed. For example, if
batch processing techniques are used, a batch may show as closed in
the dump without all of the items which are in the batch appearing.
</para>
<para>
This option will make no difference if there are no read-write
transactions active when pg_dump is started. If read-write
transactions are active, the start of the dump may be delayed for an
indeterminate length of time. Once running, performance with or
without the switch is the same.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><option>--no-tablespaces</option></term>
<listitem>

View File

@ -1144,7 +1144,7 @@ FOR SHARE [ OF <replaceable class="parameter">table_name</replaceable> [, ...] ]
has already locked a selected row or rows, <command>SELECT FOR
UPDATE</command> will wait for the other transaction to complete,
and will then lock and return the updated row (or no row, if the
row was deleted). Within a <literal>SERIALIZABLE</> transaction,
row was deleted). Within a <literal>REPEATABLE READ</> or <literal>SERIALIZABLE</> transaction,
however, an error will be thrown if a row to be locked has changed
since the transaction started. For further discussion see <xref
linkend="mvcc">.

View File

@ -15,6 +15,21 @@
<primary>SET TRANSACTION</primary>
</indexterm>
<indexterm>
<primary>transaction isolation level</primary>
<secondary>setting</secondary>
</indexterm>
<indexterm>
<primary>read-only transaction</primary>
<secondary>setting</secondary>
</indexterm>
<indexterm>
<primary>deferrable transaction</primary>
<secondary>setting</secondary>
</indexterm>
<refsynopsisdiv>
<synopsis>
SET TRANSACTION <replaceable class="parameter">transaction_mode</replaceable> [, ...]
@ -24,6 +39,7 @@ SET SESSION CHARACTERISTICS AS TRANSACTION <replaceable class="parameter">transa
ISOLATION LEVEL { SERIALIZABLE | REPEATABLE READ | READ COMMITTED | READ UNCOMMITTED }
READ WRITE | READ ONLY
[ NOT ] DEFERRABLE
</synopsis>
</refsynopsisdiv>
@ -42,8 +58,8 @@ SET SESSION CHARACTERISTICS AS TRANSACTION <replaceable class="parameter">transa
<para>
The available transaction characteristics are the transaction
isolation level and the transaction access mode (read/write or
read-only).
isolation level, the transaction access mode (read/write or
read-only), and the deferrable mode.
</para>
<para>
@ -62,7 +78,7 @@ SET SESSION CHARACTERISTICS AS TRANSACTION <replaceable class="parameter">transa
</varlistentry>
<varlistentry>
<term><literal>SERIALIZABLE</literal></term>
<term><literal>REPEATABLE READ</literal></term>
<listitem>
<para>
All statements of the current transaction can only see rows committed
@ -71,14 +87,27 @@ SET SESSION CHARACTERISTICS AS TRANSACTION <replaceable class="parameter">transa
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>SERIALIZABLE</literal></term>
<listitem>
<para>
All statements of the current transaction can only see rows committed
before the first query or data-modification statement was executed in
this transaction. If a pattern of reads and writes among concurrent
serializable transactions would create a situation which could not
have occurred for any serial (one-at-a-time) execution of those
transactions, one of them will be rolled back with a
<literal>serialization_failure</literal> <literal>SQLSTATE</literal>.
</para>
</listitem>
</varlistentry>
</variablelist>
The SQL standard defines two additional levels, <literal>READ
UNCOMMITTED</literal> and <literal>REPEATABLE READ</literal>.
The SQL standard defines one additional level, <literal>READ
UNCOMMITTED</literal>.
In <productname>PostgreSQL</productname> <literal>READ
UNCOMMITTED</literal> is treated as
<literal>READ COMMITTED</literal>, while <literal>REPEATABLE
READ</literal> is treated as <literal>SERIALIZABLE</literal>.
UNCOMMITTED</literal> is treated as <literal>READ COMMITTED</literal>.
</para>
<para>
@ -127,8 +156,9 @@ SET SESSION CHARACTERISTICS AS TRANSACTION <replaceable class="parameter">transa
<para>
The session default transaction modes can also be set by setting the
configuration parameters <xref linkend="guc-default-transaction-isolation">
and <xref linkend="guc-default-transaction-read-only">.
configuration parameters <xref linkend="guc-default-transaction-isolation">,
<xref linkend="guc-default-transaction-read-only">, and
<xref linkend="guc-default-transaction-deferrable">.
(In fact <command>SET SESSION CHARACTERISTICS</command> is just a
verbose equivalent for setting these variables with <command>SET</>.)
This means the defaults can be set in the configuration file, via
@ -146,9 +176,7 @@ SET SESSION CHARACTERISTICS AS TRANSACTION <replaceable class="parameter">transa
isolation level in the standard. In
<productname>PostgreSQL</productname> the default is ordinarily
<literal>READ COMMITTED</literal>, but you can change it as
mentioned above. Because of lack of predicate locking, the
<literal>SERIALIZABLE</literal> level is not truly
serializable. See <xref linkend="mvcc"> for details.
mentioned above.
</para>
<para>
@ -158,6 +186,12 @@ SET SESSION CHARACTERISTICS AS TRANSACTION <replaceable class="parameter">transa
not implemented in the <productname>PostgreSQL</productname> server.
</para>
<para>
The <literal>DEFERRABLE</literal>
<replaceable class="parameter">transaction_mode</replaceable>
is a <productname>PostgreSQL</productname> language extension.
</para>
<para>
The SQL standard requires commas between successive <replaceable
class="parameter">transaction_modes</replaceable>, but for historical

View File

@ -27,6 +27,7 @@ START TRANSACTION [ <replaceable class="parameter">transaction_mode</replaceable
ISOLATION LEVEL { SERIALIZABLE | REPEATABLE READ | READ COMMITTED | READ UNCOMMITTED }
READ WRITE | READ ONLY
[ NOT ] DEFERRABLE
</synopsis>
</refsynopsisdiv>
@ -34,8 +35,8 @@ START TRANSACTION [ <replaceable class="parameter">transaction_mode</replaceable
<title>Description</title>
<para>
This command begins a new transaction block. If the isolation level or
read/write mode is specified, the new transaction has those
This command begins a new transaction block. If the isolation level,
read/write mode, or deferrable mode is specified, the new transaction has those
characteristics, as if <xref linkend="sql-set-transaction"> was executed. This is the same
as the <xref linkend="sql-begin"> command.
</para>
@ -64,6 +65,12 @@ START TRANSACTION [ <replaceable class="parameter">transaction_mode</replaceable
as a convenience.
</para>
<para>
The <literal>DEFERRABLE</literal>
<replaceable class="parameter">transaction_mode</replaceable>
is a <productname>PostgreSQL</productname> language extension.
</para>
<para>
The SQL standard requires commas between successive <replaceable
class="parameter">transaction_modes</replaceable>, but for historical

View File

@ -340,7 +340,7 @@ SPI_execute("INSERT INTO foo SELECT * FROM bar", false, 5);
<function>SPI_execute</function> increments the command
counter and computes a new <firstterm>snapshot</> before executing each
command in the string. The snapshot does not actually change if the
current transaction isolation level is <literal>SERIALIZABLE</>, but in
current transaction isolation level is <literal>SERIALIZABLE</> or <literal>REPEATABLE READ</>, but in
<literal>READ COMMITTED</> mode the snapshot update allows each command to
see the results of newly committed transactions from other sessions.
This is essential for consistent behavior when the commands are modifying

View File

@ -57,6 +57,7 @@
#include "storage/bufmgr.h"
#include "storage/freespace.h"
#include "storage/lmgr.h"
#include "storage/predicate.h"
#include "storage/procarray.h"
#include "storage/smgr.h"
#include "storage/standby.h"
@ -261,20 +262,20 @@ heapgetpage(HeapScanDesc scan, BlockNumber page)
{
if (ItemIdIsNormal(lpp))
{
HeapTupleData loctup;
bool valid;
loctup.t_data = (HeapTupleHeader) PageGetItem((Page) dp, lpp);
loctup.t_len = ItemIdGetLength(lpp);
ItemPointerSet(&(loctup.t_self), page, lineoff);
if (all_visible)
valid = true;
else
{
HeapTupleData loctup;
loctup.t_data = (HeapTupleHeader) PageGetItem((Page) dp, lpp);
loctup.t_len = ItemIdGetLength(lpp);
ItemPointerSet(&(loctup.t_self), page, lineoff);
valid = HeapTupleSatisfiesVisibility(&loctup, snapshot, buffer);
}
CheckForSerializableConflictOut(valid, scan->rs_rd, &loctup, buffer);
if (valid)
scan->rs_vistuples[ntup++] = lineoff;
}
@ -468,12 +469,16 @@ heapgettup(HeapScanDesc scan,
snapshot,
scan->rs_cbuf);
CheckForSerializableConflictOut(valid, scan->rs_rd, tuple, scan->rs_cbuf);
if (valid && key != NULL)
HeapKeyTest(tuple, RelationGetDescr(scan->rs_rd),
nkeys, key, valid);
if (valid)
{
if (!scan->rs_relpredicatelocked)
PredicateLockTuple(scan->rs_rd, tuple);
LockBuffer(scan->rs_cbuf, BUFFER_LOCK_UNLOCK);
return;
}
@ -741,12 +746,16 @@ heapgettup_pagemode(HeapScanDesc scan,
nkeys, key, valid);
if (valid)
{
if (!scan->rs_relpredicatelocked)
PredicateLockTuple(scan->rs_rd, tuple);
scan->rs_cindex = lineindex;
return;
}
}
else
{
if (!scan->rs_relpredicatelocked)
PredicateLockTuple(scan->rs_rd, tuple);
scan->rs_cindex = lineindex;
return;
}
@ -1213,6 +1222,7 @@ heap_beginscan_internal(Relation relation, Snapshot snapshot,
scan->rs_strategy = NULL; /* set in initscan */
scan->rs_allow_strat = allow_strat;
scan->rs_allow_sync = allow_sync;
scan->rs_relpredicatelocked = false;
/*
* we can use page-at-a-time mode if it's an MVCC-safe snapshot
@ -1459,8 +1469,13 @@ heap_fetch(Relation relation,
*/
valid = HeapTupleSatisfiesVisibility(tuple, snapshot, buffer);
if (valid)
PredicateLockTuple(relation, tuple);
LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
CheckForSerializableConflictOut(valid, relation, tuple, buffer);
if (valid)
{
/*
@ -1506,13 +1521,15 @@ heap_fetch(Relation relation,
* heap_fetch, we do not report any pgstats count; caller may do so if wanted.
*/
bool
heap_hot_search_buffer(ItemPointer tid, Buffer buffer, Snapshot snapshot,
bool *all_dead)
heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
Snapshot snapshot, bool *all_dead)
{
Page dp = (Page) BufferGetPage(buffer);
TransactionId prev_xmax = InvalidTransactionId;
OffsetNumber offnum;
bool at_chain_start;
bool valid;
bool match_found;
if (all_dead)
*all_dead = true;
@ -1522,6 +1539,7 @@ heap_hot_search_buffer(ItemPointer tid, Buffer buffer, Snapshot snapshot,
Assert(ItemPointerGetBlockNumber(tid) == BufferGetBlockNumber(buffer));
offnum = ItemPointerGetOffsetNumber(tid);
at_chain_start = true;
match_found = false;
/* Scan through possible multiple members of HOT-chain */
for (;;)
@ -1552,6 +1570,8 @@ heap_hot_search_buffer(ItemPointer tid, Buffer buffer, Snapshot snapshot,
heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
heapTuple.t_len = ItemIdGetLength(lp);
heapTuple.t_tableOid = relation->rd_id;
heapTuple.t_self = *tid;
/*
* Shouldn't see a HEAP_ONLY tuple at chain start.
@ -1569,12 +1589,18 @@ heap_hot_search_buffer(ItemPointer tid, Buffer buffer, Snapshot snapshot,
break;
/* If it's visible per the snapshot, we must return it */
if (HeapTupleSatisfiesVisibility(&heapTuple, snapshot, buffer))
valid = HeapTupleSatisfiesVisibility(&heapTuple, snapshot, buffer);
CheckForSerializableConflictOut(valid, relation, &heapTuple, buffer);
if (valid)
{
ItemPointerSetOffsetNumber(tid, offnum);
PredicateLockTuple(relation, &heapTuple);
if (all_dead)
*all_dead = false;
return true;
if (IsolationIsSerializable())
match_found = true;
else
return true;
}
/*
@ -1603,7 +1629,7 @@ heap_hot_search_buffer(ItemPointer tid, Buffer buffer, Snapshot snapshot,
break; /* end of chain */
}
return false;
return match_found;
}
/*
@ -1622,7 +1648,7 @@ heap_hot_search(ItemPointer tid, Relation relation, Snapshot snapshot,
buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
LockBuffer(buffer, BUFFER_LOCK_SHARE);
result = heap_hot_search_buffer(tid, buffer, snapshot, all_dead);
result = heap_hot_search_buffer(tid, relation, buffer, snapshot, all_dead);
LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
ReleaseBuffer(buffer);
return result;
@ -1729,6 +1755,7 @@ heap_get_latest_tid(Relation relation,
* result candidate.
*/
valid = HeapTupleSatisfiesVisibility(&tp, snapshot, buffer);
CheckForSerializableConflictOut(valid, relation, &tp, buffer);
if (valid)
*tid = ctid;
@ -1893,6 +1920,13 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
buffer = RelationGetBufferForTuple(relation, heaptup->t_len,
InvalidBuffer, options, bistate);
/*
* We're about to do the actual insert -- check for conflict at the
* relation or buffer level first, to avoid possibly having to roll
* back work we've just done.
*/
CheckForSerializableConflictIn(relation, NULL, buffer);
/* NO EREPORT(ERROR) from here till changes are logged */
START_CRIT_SECTION();
@ -2193,6 +2227,12 @@ l1:
return result;
}
/*
* We're about to do the actual delete -- check for conflict first,
* to avoid possibly having to roll back work we've just done.
*/
CheckForSerializableConflictIn(relation, &tp, buffer);
/* replace cid with a combo cid if necessary */
HeapTupleHeaderAdjustCmax(tp.t_data, &cid, &iscombo);
@ -2546,6 +2586,12 @@ l2:
return result;
}
/*
* We're about to do the actual update -- check for conflict first,
* to avoid possibly having to roll back work we've just done.
*/
CheckForSerializableConflictIn(relation, &oldtup, buffer);
/* Fill in OID and transaction status data for newtup */
if (relation->rd_rel->relhasoids)
{
@ -2690,6 +2736,16 @@ l2:
heaptup = newtup;
}
/*
* We're about to create the new tuple -- check for conflict first,
* to avoid possibly having to roll back work we've just done.
*
* NOTE: For a tuple insert, we only need to check for table locks, since
* predicate locking at the index level will cover ranges for anything
* except a table scan. Therefore, only provide the relation.
*/
CheckForSerializableConflictIn(relation, NULL, InvalidBuffer);
/*
* At this point newbuf and buffer are both pinned and locked, and newbuf
* has enough space for the new tuple. If they are the same buffer, only
@ -2799,6 +2855,12 @@ l2:
END_CRIT_SECTION();
/*
* Any existing SIREAD locks on the old tuple must be linked to the new
* tuple for conflict detection purposes.
*/
PredicateLockTupleRowVersionLink(relation, &oldtup, newtup);
if (newbuf != buffer)
LockBuffer(newbuf, BUFFER_LOCK_UNLOCK);
LockBuffer(buffer, BUFFER_LOCK_UNLOCK);

View File

@ -64,9 +64,11 @@
#include "access/relscan.h"
#include "access/transam.h"
#include "access/xact.h"
#include "pgstat.h"
#include "storage/bufmgr.h"
#include "storage/lmgr.h"
#include "storage/predicate.h"
#include "utils/relcache.h"
#include "utils/snapmgr.h"
#include "utils/tqual.h"
@ -192,6 +194,11 @@ index_insert(Relation indexRelation,
RELATION_CHECKS;
GET_REL_PROCEDURE(aminsert);
if (!(indexRelation->rd_am->ampredlocks))
CheckForSerializableConflictIn(indexRelation,
(HeapTuple) NULL,
InvalidBuffer);
/*
* have the am's insert proc do all the work.
*/
@ -266,6 +273,9 @@ index_beginscan_internal(Relation indexRelation,
RELATION_CHECKS;
GET_REL_PROCEDURE(ambeginscan);
if (!(indexRelation->rd_am->ampredlocks))
PredicateLockRelation(indexRelation);
/*
* We hold a reference count to the relcache entry throughout the scan.
*/
@ -523,6 +533,7 @@ index_getnext(IndexScanDesc scan, ScanDirection direction)
{
ItemId lp;
ItemPointer ctid;
bool valid;
/* check for bogus TID */
if (offnum < FirstOffsetNumber ||
@ -577,8 +588,13 @@ index_getnext(IndexScanDesc scan, ScanDirection direction)
break;
/* If it's visible per the snapshot, we must return it */
if (HeapTupleSatisfiesVisibility(heapTuple, scan->xs_snapshot,
scan->xs_cbuf))
valid = HeapTupleSatisfiesVisibility(heapTuple, scan->xs_snapshot,
scan->xs_cbuf);
CheckForSerializableConflictOut(valid, scan->heapRelation,
heapTuple, scan->xs_cbuf);
if (valid)
{
/*
* If the snapshot is MVCC, we know that it could accept at
@ -586,7 +602,8 @@ index_getnext(IndexScanDesc scan, ScanDirection direction)
* any more members. Otherwise, check for continuation of the
* HOT-chain, and set state for next time.
*/
if (IsMVCCSnapshot(scan->xs_snapshot))
if (IsMVCCSnapshot(scan->xs_snapshot)
&& !IsolationIsSerializable())
scan->xs_next_hot = InvalidOffsetNumber;
else if (HeapTupleIsHotUpdated(heapTuple))
{
@ -598,6 +615,8 @@ index_getnext(IndexScanDesc scan, ScanDirection direction)
else
scan->xs_next_hot = InvalidOffsetNumber;
PredicateLockTuple(scan->heapRelation, heapTuple);
LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
pgstat_count_heap_fetch(scan->indexRelation);

View File

@ -21,6 +21,7 @@
#include "miscadmin.h"
#include "storage/bufmgr.h"
#include "storage/lmgr.h"
#include "storage/predicate.h"
#include "utils/inval.h"
#include "utils/tqual.h"
@ -174,6 +175,14 @@ top:
if (checkUnique != UNIQUE_CHECK_EXISTING)
{
/*
* The only conflict predicate locking cares about for indexes is when
* an index tuple insert conflicts with an existing lock. Since the
* actual location of the insert is hard to predict because of the
* random search used to prevent O(N^2) performance when there are many
* duplicate entries, we can just use the "first valid" page.
*/
CheckForSerializableConflictIn(rel, NULL, buf);
/* do the insertion */
_bt_findinsertloc(rel, &buf, &offset, natts, itup_scankey, itup, heapRel);
_bt_insertonpg(rel, buf, stack, itup, offset, false);
@ -696,6 +705,9 @@ _bt_insertonpg(Relation rel,
/* split the buffer into left and right halves */
rbuf = _bt_split(rel, buf, firstright,
newitemoff, itemsz, itup, newitemonleft);
PredicateLockPageSplit(rel,
BufferGetBlockNumber(buf),
BufferGetBlockNumber(rbuf));
/*----------
* By here,

View File

@ -29,6 +29,7 @@
#include "storage/freespace.h"
#include "storage/indexfsm.h"
#include "storage/lmgr.h"
#include "storage/predicate.h"
#include "utils/inval.h"
#include "utils/snapmgr.h"
@ -1183,6 +1184,12 @@ _bt_pagedel(Relation rel, Buffer buf, BTStack stack)
rightsib, opaque->btpo_prev, target,
RelationGetRelationName(rel));
/*
* Any insert which would have gone on the target block will now go to the
* right sibling block.
*/
PredicateLockPageCombine(rel, target, rightsib);
/*
* Next find and write-lock the current parent of the target page. This is
* essentially the same as the corresponding step of splitting.

View File

@ -29,6 +29,7 @@
#include "storage/indexfsm.h"
#include "storage/ipc.h"
#include "storage/lmgr.h"
#include "storage/predicate.h"
#include "storage/smgr.h"
#include "utils/memutils.h"
@ -822,6 +823,7 @@ restart:
if (_bt_page_recyclable(page))
{
/* Okay to recycle this page */
Assert(!PageIsPredicateLocked(rel, blkno));
RecordFreeIndexPage(rel, blkno);
vstate->totFreePages++;
stats->pages_deleted++;

View File

@ -21,6 +21,7 @@
#include "miscadmin.h"
#include "pgstat.h"
#include "storage/bufmgr.h"
#include "storage/predicate.h"
#include "utils/lsyscache.h"
#include "utils/rel.h"
@ -63,7 +64,10 @@ _bt_search(Relation rel, int keysz, ScanKey scankey, bool nextkey,
/* If index is empty and access = BT_READ, no root page is created. */
if (!BufferIsValid(*bufP))
{
PredicateLockRelation(rel); /* Nothing finer to lock exists. */
return (BTStack) NULL;
}
/* Loop iterates once per level descended in the tree */
for (;;)
@ -88,7 +92,11 @@ _bt_search(Relation rel, int keysz, ScanKey scankey, bool nextkey,
page = BufferGetPage(*bufP);
opaque = (BTPageOpaque) PageGetSpecialPointer(page);
if (P_ISLEAF(opaque))
{
if (access == BT_READ)
PredicateLockPage(rel, BufferGetBlockNumber(*bufP));
break;
}
/*
* Find the appropriate item on the internal page, and get the child
@ -1142,6 +1150,7 @@ _bt_steppage(IndexScanDesc scan, ScanDirection dir)
opaque = (BTPageOpaque) PageGetSpecialPointer(page);
if (!P_IGNORE(opaque))
{
PredicateLockPage(rel, blkno);
/* see if there are any matches on this page */
/* note that this will clear moreRight if we can stop */
if (_bt_readpage(scan, dir, P_FIRSTDATAKEY(opaque)))
@ -1189,6 +1198,7 @@ _bt_steppage(IndexScanDesc scan, ScanDirection dir)
opaque = (BTPageOpaque) PageGetSpecialPointer(page);
if (!P_IGNORE(opaque))
{
PredicateLockPage(rel, BufferGetBlockNumber(so->currPos.buf));
/* see if there are any matches on this page */
/* note that this will clear moreLeft if we can stop */
if (_bt_readpage(scan, dir, PageGetMaxOffsetNumber(page)))
@ -1352,6 +1362,7 @@ _bt_get_endpoint(Relation rel, uint32 level, bool rightmost)
if (!BufferIsValid(buf))
{
/* empty index... */
PredicateLockRelation(rel); /* Nothing finer to lock exists. */
return InvalidBuffer;
}
@ -1431,10 +1442,12 @@ _bt_endpoint(IndexScanDesc scan, ScanDirection dir)
if (!BufferIsValid(buf))
{
/* empty index... */
PredicateLockRelation(rel); /* Nothing finer to lock exists. */
so->currPos.buf = InvalidBuffer;
return false;
}
PredicateLockPage(rel, BufferGetBlockNumber(buf));
page = BufferGetPage(buf);
opaque = (BTPageOpaque) PageGetSpecialPointer(page);
Assert(P_ISLEAF(opaque));

View File

@ -57,6 +57,7 @@
#include "pgstat.h"
#include "replication/walsender.h"
#include "storage/fd.h"
#include "storage/predicate.h"
#include "storage/procarray.h"
#include "storage/sinvaladt.h"
#include "storage/smgr.h"
@ -1357,6 +1358,8 @@ FinishPreparedTransaction(const char *gid, bool isCommit)
else
ProcessRecords(bufptr, xid, twophase_postabort_callbacks);
PredicateLockTwoPhaseFinish(xid, isCommit);
/* Count the prepared xact as committed or aborted */
AtEOXact_PgStat(isCommit);

View File

@ -18,12 +18,14 @@
#include "access/twophase_rmgr.h"
#include "pgstat.h"
#include "storage/lock.h"
#include "storage/predicate.h"
const TwoPhaseCallback twophase_recover_callbacks[TWOPHASE_RM_MAX_ID + 1] =
{
NULL, /* END ID */
lock_twophase_recover, /* Lock */
predicatelock_twophase_recover, /* PredicateLock */
NULL, /* pgstat */
multixact_twophase_recover /* MultiXact */
};
@ -32,6 +34,7 @@ const TwoPhaseCallback twophase_postcommit_callbacks[TWOPHASE_RM_MAX_ID + 1] =
{
NULL, /* END ID */
lock_twophase_postcommit, /* Lock */
NULL, /* PredicateLock */
pgstat_twophase_postcommit, /* pgstat */
multixact_twophase_postcommit /* MultiXact */
};
@ -40,6 +43,7 @@ const TwoPhaseCallback twophase_postabort_callbacks[TWOPHASE_RM_MAX_ID + 1] =
{
NULL, /* END ID */
lock_twophase_postabort, /* Lock */
NULL, /* PredicateLock */
pgstat_twophase_postabort, /* pgstat */
multixact_twophase_postabort /* MultiXact */
};
@ -48,6 +52,7 @@ const TwoPhaseCallback twophase_standby_recover_callbacks[TWOPHASE_RM_MAX_ID + 1
{
NULL, /* END ID */
lock_twophase_standby_recover, /* Lock */
NULL, /* PredicateLock */
NULL, /* pgstat */
NULL /* MultiXact */
};

View File

@ -21,6 +21,7 @@
#include "miscadmin.h"
#include "postmaster/autovacuum.h"
#include "storage/pmsignal.h"
#include "storage/predicate.h"
#include "storage/proc.h"
#include "utils/builtins.h"
#include "utils/syscache.h"
@ -161,6 +162,10 @@ GetNewTransactionId(bool isSubXact)
ExtendCLOG(xid);
ExtendSUBTRANS(xid);
/* If it's top level, the predicate locking system also needs to know. */
if (!isSubXact)
RegisterPredicateLockingXid(xid);
/*
* Now advance the nextXid counter. This must not happen until after we
* have successfully completed ExtendCLOG() --- if that routine fails, we

View File

@ -40,6 +40,7 @@
#include "storage/bufmgr.h"
#include "storage/fd.h"
#include "storage/lmgr.h"
#include "storage/predicate.h"
#include "storage/procarray.h"
#include "storage/sinvaladt.h"
#include "storage/smgr.h"
@ -63,6 +64,9 @@ int XactIsoLevel;
bool DefaultXactReadOnly = false;
bool XactReadOnly;
bool DefaultXactDeferrable = false;
bool XactDeferrable;
bool XactSyncCommit = true;
int CommitDelay = 0; /* precommit delay in microseconds */
@ -1640,6 +1644,7 @@ StartTransaction(void)
s->startedInRecovery = false;
XactReadOnly = DefaultXactReadOnly;
}
XactDeferrable = DefaultXactDeferrable;
XactIsoLevel = DefaultXactIsoLevel;
forceSyncCommit = false;
MyXactAccessedTempRel = false;
@ -1786,6 +1791,13 @@ CommitTransaction(void)
/* close large objects before lower-level cleanup */
AtEOXact_LargeObject(true);
/*
* Mark serializable transaction as complete for predicate locking
* purposes. This should be done as late as we can put it and still
* allow errors to be raised for failure patterns found at commit.
*/
PreCommit_CheckForSerializationFailure();
/*
* Insert notifications sent by NOTIFY commands into the queue. This
* should be late in the pre-commit sequence to minimize time spent
@ -1980,6 +1992,13 @@ PrepareTransaction(void)
/* close large objects before lower-level cleanup */
AtEOXact_LargeObject(true);
/*
* Mark serializable transaction as complete for predicate locking
* purposes. This should be done as late as we can put it and still
* allow errors to be raised for failure patterns found at commit.
*/
PreCommit_CheckForSerializationFailure();
/* NOTIFY will be handled below */
/*
@ -2044,6 +2063,7 @@ PrepareTransaction(void)
AtPrepare_Notify();
AtPrepare_Locks();
AtPrepare_PredicateLocks();
AtPrepare_PgStat();
AtPrepare_MultiXact();
AtPrepare_RelationMap();
@ -2103,6 +2123,7 @@ PrepareTransaction(void)
PostPrepare_MultiXact(xid);
PostPrepare_Locks(xid);
PostPrepare_PredicateLocks(xid);
ResourceOwnerRelease(TopTransactionResourceOwner,
RESOURCE_RELEASE_LOCKS,

View File

@ -616,6 +616,15 @@ assign_XactIsoLevel(const char *value, bool doit, GucSource source)
errmsg("SET TRANSACTION ISOLATION LEVEL must not be called in a subtransaction")));
return NULL;
}
/* Can't go to serializable mode while recovery is still active */
if (RecoveryInProgress() && strcmp(value, "serializable") == 0)
{
ereport(GUC_complaint_elevel(source),
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("cannot use serializable mode in a hot standby"),
errhint("You can use REPEATABLE READ instead.")));
return false;
}
}
if (strcmp(value, "serializable") == 0)
@ -667,6 +676,35 @@ show_XactIsoLevel(void)
}
}
/*
* SET TRANSACTION [NOT] DEFERRABLE
*/
bool
assign_transaction_deferrable(bool newval, bool doit, GucSource source)
{
/* source == PGC_S_OVERRIDE means do it anyway, eg at xact abort */
if (source == PGC_S_OVERRIDE)
return true;
if (IsSubTransaction())
{
ereport(GUC_complaint_elevel(source),
(errcode(ERRCODE_ACTIVE_SQL_TRANSACTION),
errmsg("SET TRANSACTION [NOT] DEFERRABLE cannot be called within a subtransaction")));
return false;
}
if (FirstSnapshotSet)
{
ereport(GUC_complaint_elevel(source),
(errcode(ERRCODE_ACTIVE_SQL_TRANSACTION),
errmsg("SET TRANSACTION [NOT] DEFERRABLE must be called before any query")));
return false;
}
return true;
}
/*
* Random number seed

View File

@ -42,6 +42,7 @@
#include "executor/nodeBitmapHeapscan.h"
#include "pgstat.h"
#include "storage/bufmgr.h"
#include "storage/predicate.h"
#include "utils/memutils.h"
#include "utils/snapmgr.h"
#include "utils/tqual.h"
@ -351,7 +352,7 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
ItemPointerData tid;
ItemPointerSet(&tid, page, offnum);
if (heap_hot_search_buffer(&tid, buffer, snapshot, NULL))
if (heap_hot_search_buffer(&tid, scan->rs_rd, buffer, snapshot, NULL))
scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
}
}

View File

@ -28,6 +28,7 @@
#include "access/relscan.h"
#include "executor/execdebug.h"
#include "executor/nodeSeqscan.h"
#include "storage/predicate.h"
static void InitScanRelation(SeqScanState *node, EState *estate);
static TupleTableSlot *SeqNext(SeqScanState *node);
@ -105,11 +106,15 @@ SeqRecheck(SeqScanState *node, TupleTableSlot *slot)
* tuple.
* We call the ExecScan() routine and pass it the appropriate
* access method functions.
* For serializable transactions, we first acquire a predicate
* lock on the entire relation.
* ----------------------------------------------------------------
*/
TupleTableSlot *
ExecSeqScan(SeqScanState *node)
{
PredicateLockRelation(node->ss_currentRelation);
node->ss_currentScanDesc->rs_relpredicatelocked = true;
return ExecScan((ScanState *) node,
(ExecScanAccessMtd) SeqNext,
(ExecScanRecheckMtd) SeqRecheck);

View File

@ -6768,6 +6768,12 @@ transaction_mode_item:
| READ WRITE
{ $$ = makeDefElem("transaction_read_only",
makeIntConst(FALSE, @1)); }
| DEFERRABLE
{ $$ = makeDefElem("transaction_deferrable",
makeIntConst(TRUE, @1)); }
| NOT DEFERRABLE
{ $$ = makeDefElem("transaction_deferrable",
makeIntConst(FALSE, @1)); }
;
/* Syntax with commas is SQL-spec, without commas is Postgres historical */

View File

@ -32,6 +32,7 @@
#include "storage/ipc.h"
#include "storage/pg_shmem.h"
#include "storage/pmsignal.h"
#include "storage/predicate.h"
#include "storage/procarray.h"
#include "storage/procsignal.h"
#include "storage/sinvaladt.h"
@ -105,6 +106,7 @@ CreateSharedMemoryAndSemaphores(bool makePrivate, int port)
sizeof(ShmemIndexEnt)));
size = add_size(size, BufferShmemSize());
size = add_size(size, LockShmemSize());
size = add_size(size, PredicateLockShmemSize());
size = add_size(size, ProcGlobalShmemSize());
size = add_size(size, XLOGShmemSize());
size = add_size(size, CLOGShmemSize());
@ -199,6 +201,11 @@ CreateSharedMemoryAndSemaphores(bool makePrivate, int port)
*/
InitLocks();
/*
* Set up predicate lock manager
*/
InitPredicateLocks();
/*
* Set up process table
*/

View File

@ -198,7 +198,7 @@ ShmemAlloc(Size size)
* Returns TRUE if the pointer points within the shared memory segment.
*/
bool
ShmemAddrIsValid(void *addr)
ShmemAddrIsValid(const void *addr)
{
return (addr >= ShmemBase) && (addr < ShmemEnd);
}

View File

@ -43,14 +43,12 @@ SHMQueueInit(SHM_QUEUE *queue)
* SHMQueueIsDetached -- TRUE if element is not currently
* in a queue.
*/
#ifdef NOT_USED
bool
SHMQueueIsDetached(SHM_QUEUE *queue)
SHMQueueIsDetached(const SHM_QUEUE *queue)
{
Assert(ShmemAddrIsValid(queue));
return (queue->prev == NULL);
}
#endif
/*
* SHMQueueElemInit -- clear an element's links
@ -146,7 +144,7 @@ SHMQueueInsertAfter(SHM_QUEUE *queue, SHM_QUEUE *elem)
*--------------------
*/
Pointer
SHMQueueNext(SHM_QUEUE *queue, SHM_QUEUE *curElem, Size linkOffset)
SHMQueueNext(const SHM_QUEUE *queue, const SHM_QUEUE *curElem, Size linkOffset)
{
SHM_QUEUE *elemPtr = curElem->next;
@ -162,7 +160,7 @@ SHMQueueNext(SHM_QUEUE *queue, SHM_QUEUE *curElem, Size linkOffset)
* SHMQueueEmpty -- TRUE if queue head is only element, FALSE otherwise
*/
bool
SHMQueueEmpty(SHM_QUEUE *queue)
SHMQueueEmpty(const SHM_QUEUE *queue)
{
Assert(ShmemAddrIsValid(queue));

View File

@ -12,7 +12,7 @@ subdir = src/backend/storage/lmgr
top_builddir = ../../../..
include $(top_builddir)/src/Makefile.global
OBJS = lmgr.o lock.o proc.o deadlock.o lwlock.o spin.o s_lock.o
OBJS = lmgr.o lock.o proc.o deadlock.o lwlock.o spin.o s_lock.o predicate.o
include $(top_srcdir)/src/backend/common.mk

View File

@ -3,7 +3,7 @@ src/backend/storage/lmgr/README
Locking Overview
================
Postgres uses three types of interprocess locks:
Postgres uses four types of interprocess locks:
* Spinlocks. These are intended for *very* short-term locks. If a lock
is to be held more than a few dozen instructions, or across any sort of
@ -34,6 +34,8 @@ supports a variety of lock modes with table-driven semantics, and it has
full deadlock detection and automatic release at transaction end.
Regular locks should be used for all user-driven lock requests.
* SIReadLock predicate locks. See separate README-SSI file for details.
Acquisition of either a spinlock or a lightweight lock causes query
cancel and die() interrupts to be held off until all such locks are
released. No such restriction exists for regular locks, however. Also

View File

@ -0,0 +1,537 @@
src/backend/storage/lmgr/README-SSI
Serializable Snapshot Isolation (SSI) and Predicate Locking
===========================================================
This is currently sitting in the lmgr directory because about 90% of
the code is an implementation of predicate locking, which is required
for SSI, rather than being directly related to SSI itself. When
another use for predicate locking justifies the effort to tease these
two things apart, this README file should probably be split.
Credits
-------
This feature was developed by Kevin Grittner and Dan R. K. Ports,
with review and suggestions from Joe Conway, Heikki Linnakangas, and
Jeff Davis. It is based on work published in these papers:
Michael J. Cahill, Uwe Röhm, and Alan D. Fekete. 2008.
Serializable isolation for snapshot databases.
In SIGMOD '08: Proceedings of the 2008 ACM SIGMOD
international conference on Management of data,
pages 729-738, New York, NY, USA. ACM.
http://doi.acm.org/10.1145/1376616.1376690
Michael James Cahill. 2009.
Serializable Isolation for Snapshot Databases.
Sydney Digital Theses.
University of Sydney, School of Information Technologies.
http://hdl.handle.net/2123/5353
Overview
--------
With true serializable transactions, if you can show that your
transaction will do the right thing if there are no concurrent
transactions, it will do the right thing in any mix of serializable
transactions or be rolled back with a serialization failure. This
feature has been implemented in PostgreSQL using SSI.
Serializable and Snapshot Transaction Isolation Levels
------------------------------------------------------
Serializable transaction isolation is attractive for shops with
active development by many programmers against a complex schema
because it guarantees data integrity with very little staff time --
if a transaction can be shown to always do the right thing when it is
run alone (before or after any other transaction), it will always do
the right thing in any mix of concurrent serializable transactions.
Where conflicts with other transactions would result in an
inconsistent state within the database, or an inconsistent view of
the data, a serializable transaction will block or roll back to
prevent the anomaly. The SQL standard provides a specific SQLSTATE
for errors generated when a transaction rolls back for this reason,
so that transactions can be retried automatically.
Before version 9.1 PostgreSQL did not support a full serializable
isolation level. A request for serializable transaction isolation
actually provided snapshot isolation. This has well-known anomalies
which can allow data corruption or inconsistent views of the data
during concurrent transactions, although these anomalies only occur
when certain patterns of read-write dependencies exist within a set
of concurrent transactions. Where these patterns exist, the anomalies
can be prevented by introducing conflicts through explicitly
programmed locks or otherwise unnecessary writes to the database.
Snapshot isolation is popular because performance is better than
serializable isolation and the integrity guarantees which it does
provide allow anomalies to be avoided or managed with reasonable
effort in many environments.
Serializable Isolation Implementation Strategies
------------------------------------------------
Techniques for implementing full serializable isolation have been
published and in use in many database products for decades. The
primary technique which has been used is Strict 2 Phase Locking
(S2PL), which operates by blocking writes against data which has been
read by concurrent transactions and blocking any access (read or
write) against data which has been written by concurrent
transactions. A cycle in a graph of blocking indicates a deadlock,
requiring a rollback. Blocking and deadlocks under S2PL in high
contention workloads can be debilitating, crippling throughput and
response time.
A new technique for implementing full serializable isolation in an
MVCC database appears in the literature beginning in 2008. This
technique, known as Serializable Snapshot Isolation (SSI), has many of
the advantages of snapshot isolation. In particular, reads don't
block anything and writes don't block reads. Essentially, it runs
snapshot isolation but monitors the read-write conflicts between
transactions to identify dangerous structures in the transaction
graph which indicate that a set of concurrent transactions might
produce an anomaly, and rolls back transactions to ensure that no
anomalies occur. It will produce some false positives (where a
transaction is rolled back even though there would not have been an
anomaly), but will never let an anomaly occur. In the two known
prototype implementations, performance for many workloads (even with
the need to restart transactions which are rolled back) is very close
to snapshot isolation and generally far better than an S2PL
implementation.
Apparent Serial Order of Execution
----------------------------------
One way to understand when snapshot anomalies can occur, and to
visualize the difference between the serializable implementations
described above, is to consider that among transactions executing at
the serializable transaction isolation level, the results are
required to be consistent with some serial (one-at-a-time) execution
of the transactions[1]. How is that order determined in each?
S2PL locks rows used by the transaction in a way which blocks
conflicting access, so that at the moment of a successful commit it
is certain that no conflicting access has occurred. Some transactions
may have blocked, essentially being partially serialized with the
committing transaction, to allow this. Some transactions may have
been rolled back, due to cycles in the blocking. But with S2PL,
transactions can always be viewed as having occurred serially, in the
order of successful commit.
With snapshot isolation, reads never block writes, nor vice versa, so
there is much less actual serialization. The order in which
transactions appear to have executed is determined by something more
subtle than in S2PL: read/write dependencies. If a transaction
attempts to read data which is not visible to it because the
transaction which wrote it (or will later write it) is concurrent
(one of them was running when the other acquired its snapshot), then
the reading transaction appears to have executed first, regardless of
the actual sequence of transaction starts or commits (since it sees a
database state prior to that in which the other transaction leaves
it). If one transaction has both rw-dependencies in (meaning that a
concurrent transaction attempts to read data it writes) and out
(meaning it attempts to read data a concurrent transaction writes),
and a couple other conditions are met, there can appear to be a cycle
in execution order of the transactions. This is when the anomalies
occur.
SSI works by watching for the conditions mentioned above, and rolling
back a transaction when needed to prevent any anomaly. The apparent
order of execution will always be consistent with any actual
serialization (i.e., a transaction run by itself can always be
considered to have run after any transactions committed before it
started and before any transaction which starts after it commits); but
among concurrent transactions it will appear that the transaction on
the read side of a rw-dependency executed before the transaction on
the write side.
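
The classic "write skew" pattern is a minimal example of such a cycle.
The table and data below are purely illustrative and not part of this
patch; under snapshot isolation both transactions commit and the
intended rule "at least one doctor stays on call" is silently violated,
while under SSI one of the two is rolled back:

    -- setup (hypothetical)
    CREATE TABLE doctors (name text PRIMARY KEY, on_call boolean);
    INSERT INTO doctors VALUES ('alice', true), ('bob', true);

    -- session 1
    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SELECT count(*) FROM doctors WHERE on_call;            -- sees 2
    UPDATE doctors SET on_call = false WHERE name = 'alice';

    -- session 2, concurrently
    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SELECT count(*) FROM doctors WHERE on_call;            -- also sees 2
    UPDATE doctors SET on_call = false WHERE name = 'bob';
    COMMIT;                                                -- succeeds

    -- session 1 again
    COMMIT;   -- fails with SQLSTATE 40001 (serialization_failure)

Each session reads a row the other session writes, so there is an
rw-dependency in each direction; no serial ordering of the two
transactions can explain both results, so one of them must be cancelled.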
PostgreSQL Implementation
-------------------------
The implementation of serializable transactions for PostgreSQL is
accomplished through Serializable Snapshot Isolation (SSI), based on
the work of Cahill, et al. Fundamentally, this allows snapshot
isolation to run as it has, while monitoring for conditions which
could create a serialization anomaly.
* Since this technique is based on Snapshot Isolation (SI), those
areas in PostgreSQL which don't use SI can't be brought under SSI.
This includes system tables, temporary tables, sequences, hint bit
rewrites, etc. SSI can not eliminate existing anomalies in these
areas.
* Any transaction which is run at a transaction isolation level
other than SERIALIZABLE will not be affected by SSI. If you want to
enforce business rules through SSI, all transactions should be run at
the SERIALIZABLE transaction isolation level, and that should
probably be set as the default. (A usage sketch follows this list.)
* If all transactions are run at the SERIALIZABLE transaction
isolation level, business rules can be enforced in triggers or
application code without ever having a need to acquire an explicit
lock or to use SELECT FOR SHARE or SELECT FOR UPDATE.
* Those who want to continue to use snapshot isolation without
the additional protections of SSI (and the associated costs of
enforcing those protections), can use the REPEATABLE READ transaction
isolation level. This level will retain its legacy behavior, which
is identical to the old SERIALIZABLE implementation and fully
consistent with the standard's requirements for the REPEATABLE READ
transaction isolation level.
* Performance under this SSI implementation will be significantly
improved if transactions which don't modify permanent tables are
declared to be READ ONLY before they begin reading data.
* Performance under SSI will tend to degrade more rapidly with a
large number of active database transactions than under less strict
isolation levels. Limiting the number of active transactions through
use of a connection pool or similar techniques may be necessary to
maintain good performance.
* Any transaction which must be rolled back to prevent
serialization anomalies will fail with SQLSTATE 40001, which has a
standard meaning of "serialization failure".
* This SSI implementation makes an effort to choose the
transaction to be cancelled such that an immediate retry of the
transaction will not fail due to conflicts with exactly the same
transactions. Pursuant to this goal, no transaction is cancelled
until one of the other transactions in the set of conflicts which
could generate an anomaly has successfully committed. This is
conceptually similar to how write conflicts are handled. To fully
implement this guarantee there needs to be a way to roll back the
active transaction for another process with a serialization failure
SQLSTATE, even if it is "idle in transaction".
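
A sketch of the recommended usage pattern from the points above; the
table name is hypothetical, and the retry logic appears only as a
comment because it belongs in the application:

    -- Make full serializability the session default (can also be set in
    -- postgresql.conf or via ALTER DATABASE ... SET):
    SET default_transaction_isolation = 'serializable';

    -- Transactions that modify no permanent tables should say so:
    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY;
    SELECT count(*) FROM orders;
    COMMIT;

    -- Read-write transactions need no explicit locking or SELECT FOR
    -- UPDATE, but the application must be prepared to retry the whole
    -- transaction from the top whenever a statement or COMMIT fails
    -- with SQLSTATE 40001 (serialization_failure).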
Predicate Locking
-----------------
Both S2PL and SSI require some form of predicate locking to handle
situations where reads conflict with later inserts or with later
updates which move data into the selected range. PostgreSQL didn't
already have predicate locking, so it needed to be added to support
full serializable transactions under either strategy. Practical
implementations of predicate locking generally involve acquiring
locks against data as it is accessed, using multiple granularities
(tuple, page, table, etc.) with escalation as needed to keep the lock
count to a number which can be tracked within RAM structures, and
this was used in PostgreSQL. Coarse granularities can cause some
false positive indications of conflict. The number of false positives
can be influenced by plan choice.
Implementation overview
-----------------------
New RAM structures, inspired by those used to track traditional locks
in PostgreSQL, but tailored to the needs of SIREAD predicate locking,
are used. These refer to physical objects actually accessed in the
course of executing the query, to model the predicates through
inference. Anyone interested in this subject should review the
Hellerstein, Stonebraker and Hamilton paper[2], along with the
locking papers referenced from that and the Cahill papers.
Because the SIREAD locks don't block, traditional locking techniques
had to be modified. Intent locking (locking higher level objects
before locking lower level objects) doesn't work with non-blocking
"locks" (which are, in some respects, more like flags than locks).
A configurable amount of shared memory is reserved at postmaster
start-up to track predicate locks. This size cannot be changed
without a restart.
* To prevent resource exhaustion, multiple fine-grained locks may
be promoted to a single coarser-grained lock as needed.
* An attempt to acquire an SIREAD lock on a tuple when the same
transaction already holds an SIREAD lock on the page or the relation
will be ignored. Likewise, an attempt to lock a page when the
relation is locked will be ignored, and the acquisition of a coarser
lock will result in the automatic release of all finer-grained locks
it covers.
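
As noted above, the space reserved for tracking SIREAD locks is fixed
at postmaster start. A sketch of the relevant postgresql.conf entry
(the value shown is the default added by this patch):

    # The shared predicate lock table is sized on the assumption that at
    # most max_predicate_locks_per_transaction * max_connections distinct
    # objects will need to be locked at any one time.
    max_predicate_locks_per_transaction = 64    # min 10
                                                # (change requires restart)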
Heap locking
------------
Predicate locks will be acquired for the heap based on the following
(a short demonstration follows this list):
* For a table scan, the entire relation will be locked.
* Each tuple read which is visible to the reading transaction
will be locked, whether or not it meets selection criteria; except
that there is no need to acquire an SIREAD lock on a tuple when the
transaction already holds a write lock on any tuple representing the
row, since a rw-dependency would also create a ww-dependency which
has more aggressive enforcement and will thus prevent any anomaly.
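
A quick way to see these rules in action is to query pg_locks (which
this patch extends to show SIREAD locks) from inside a serializable
transaction. The table here is hypothetical:

    CREATE TABLE accounts (id int PRIMARY KEY, balance numeric);

    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SELECT sum(balance) FROM accounts;          -- whole-table scan

    SELECT locktype, relation::regclass, page, tuple
      FROM pg_locks
     WHERE mode = 'SIReadLock';
    -- expect a single relation-level entry for "accounts": a table scan
    -- locks the entire relation rather than the individual tuples read
    COMMIT;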
Index AM implementations
------------------------
Since predicate locks only exist to detect writes which conflict with
earlier reads, and heap tuple locks are acquired to cover all heap
tuples actually read, including those read through indexes, the index
tuples which were actually scanned are not of interest in themselves;
we only care about their "new neighbors" -- later inserts into the
index which would have been included in the scan had they existed at
the time. Conceptually, we want to lock the gaps between and
surrounding index entries within the scanned range.
Correctness requires that any insert into an index generates a
rw-conflict with a concurrent serializable transaction if, after that
insert, re-execution of any index scan of the other transaction would
access the heap for a row not accessed during the previous execution.
Note that a non-HOT update which expires an old index entry covered
by the scan and adds a new entry for the modified row's new tuple
need not generate a conflict, although an update which "moves" a row
into the scan must generate a conflict. While correctness allows
false positives, they should be minimized for performance reasons.
Several optimizations are possible:
* An index scan which is just finding the right position for an
index insertion or deletion need not acquire a predicate lock.
* An index scan which is comparing for equality on the entire key
for a unique index need not acquire a predicate lock as long as a key
is found corresponding to a visible tuple which has not been modified
by another transaction -- there are no "between or around" gaps to
cover.
* As long as built-in foreign key enforcement continues to use
its current "special tricks" to deal with MVCC issues, predicate
locks should not be needed for scans done by enforcement code.
* If a search determines that no rows can be found regardless of
index contents because the search conditions are contradictory (e.g.,
x = 1 AND x = 2), then no predicate lock is needed.
Other index AM implementation considerations:
* If a btree search discovers that no root page has yet been
created, a predicate lock on the index relation is required;
otherwise btree searches must get to the leaf level to determine
which tuples match, so predicate locks go there.
* GiST searches can determine that there are no matches at any
level of the index, so there must be a predicate lock at each index
level during a GiST search. An index insert at the leaf level can
then be trusted to ripple up to all levels and locations where
conflicting predicate locks may exist.
* The effects of page splits, overflows, consolidations, and
removals must be carefully reviewed to ensure that predicate locks
aren't "lost" during those operations, or kept with pages which could
get re-used for different parts of the index.
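
To make the leaf-page locking described at the start of this section
concrete, here is a sketch using a hypothetical table, assuming the
planner chooses a btree index (or bitmap index) scan for the range
predicate:

    CREATE TABLE events (id int PRIMARY KEY, payload text);
    INSERT INTO events SELECT g, 'x' FROM generate_series(1, 10000) g;

    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SELECT payload FROM events WHERE id BETWEEN 100 AND 120;

    SELECT locktype, relation::regclass, page, tuple
      FROM pg_locks
     WHERE mode = 'SIReadLock'
     ORDER BY locktype;
    -- expect page-level entries on the events_pkey leaf page(s) covering
    -- the scanned key range (guarding the gaps against later inserts),
    -- plus locks on the heap rows actually read, shown at tuple level or
    -- at page level if they have been promoted
    COMMIT;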
Innovations
-----------
The PostgreSQL implementation of Serializable Snapshot Isolation
differs from what is described in the cited papers for several
reasons:
1. PostgreSQL didn't have any existing predicate locking. It had
to be added from scratch.
2. The existing in-memory lock structures were not suitable for
tracking SIREAD locks.
* The database products used for the prototype
implementations for the papers used update-in-place with a rollback
log for their MVCC implementations, while PostgreSQL leaves the old
version of a row in place and adds a new tuple to represent the row
at a new location.
* In PostgreSQL, tuple level locks are not held in RAM for
any length of time; lock information is written to the tuples
involved in the transactions.
* In PostgreSQL, existing lock structures have pointers to
memory which is related to a connection. SIREAD locks need to persist
past the end of the originating transaction and even the connection
which ran it.
* PostgreSQL needs to be able to tolerate a large number of
transactions executing while one long-running transaction stays open
-- the in-RAM techniques discussed in the papers wouldn't support
that.
3. Unlike the database products used for the prototypes described
in the papers, PostgreSQL didn't already have a true serializable
isolation level distinct from snapshot isolation.
4. PostgreSQL supports subtransactions -- an issue not mentioned
in the papers.
5. PostgreSQL doesn't assign a transaction number to a database
transaction until and unless necessary.
6. PostgreSQL has pluggable data types with user-definable
operators, as well as pluggable index types, not all of which are
based around data types which support ordering.
7. Some possible optimizations became apparent during development
and testing.
Differences from the implementation described in the papers are
listed below.
* New structures needed to be created in shared memory to track
the proper information for serializable transactions and their SIREAD
locks.
* Because PostgreSQL does not have the same concept of an "oldest
transaction ID" for all serializable transactions as assumed in the
Cahill thesis, we track the oldest snapshot xmin among serializable
transactions, and a count of how many active transactions use that
xmin. When the count hits zero we find the new oldest xmin and run a
clean-up based on that.
* Because reads in a subtransaction may cause that subtransaction
to roll back, thereby affecting what is written by the top level
transaction, predicate locks must survive a subtransaction rollback.
As a consequence, all xid usage in SSI, including predicate locking,
is based on the top level xid. When looking at an xid that comes
from a tuple's xmin or xmax, for example, we always call
SubTransGetTopmostTransaction() before doing much else with it.
* Predicate locking in PostgreSQL will start at the tuple level
when possible, with automatic conversion of multiple fine-grained
locks to coarser granularity as needed to avoid resource exhaustion.
The amount of memory used for these structures will be configurable,
to balance RAM usage against SIREAD lock granularity.
* A process-local copy of locks held by a process and the coarser
covering locks with counts, are kept to support granularity promotion
decisions with low CPU and locking overhead.
* Conflicts will be identified by looking for predicate locks
when tuples are written and looking at the MVCC information when
tuples are read. There is no matching between two RAM-based locks.
* Because write locks are stored in the heap tuples rather than a
RAM-based lock table, the optimization described in the Cahill thesis
which eliminates an SIREAD lock where there is a write lock is
implemented by the following:
1. When checking a heap write for conflicts against existing
predicate locks, a tuple lock on the tuple being written is removed.
2. When acquiring a predicate lock on a heap tuple, we
return quickly without doing anything if it is a tuple written by the
reading transaction.
* Rather than using conflictIn and conflictOut pointers which use
NULL to indicate no conflict and a self-reference to indicate
multiple conflicts or conflicts with committed transactions, we use a
list of rw-conflicts. With the more complete information, false
positives are reduced and we have sufficient data for more aggressive
clean-up and other optimizations.
o We can avoid ever rolling back a transaction until and
unless there is a pivot where a transaction on the conflict *out*
side of the pivot committed before either of the other transactions.
o We can avoid ever rolling back a transaction when the
transaction on the conflict *in* side of the pivot is explicitly or
implicitly READ ONLY unless the transaction on the conflict *out*
side of the pivot committed before the READ ONLY transaction acquired
its snapshot. (An implicit READ ONLY transaction is one which
committed without writing, even though it was not explicitly declared
to be READ ONLY.)
o We can more aggressively clean up conflicts, predicate
locks, and SSI transaction information.
* Allow a READ ONLY transaction to "opt out" of SSI if there are
no READ WRITE transactions which could cause the READ ONLY
transaction to ever become part of a "dangerous structure" of
overlapping transaction dependencies.
* Allow the user to request that a READ ONLY transaction wait
until the conditions are right for it to start in the "opt out" state
described above. We add a DEFERRABLE state to transactions, which is
specified and maintained in a way similar to READ ONLY. It is
ignored for transactions which are not SERIALIZABLE and READ ONLY.
(A usage sketch follows this list.)
* When a transaction must be rolled back, we pick among the
active transactions such that an immediate retry will not fail again
on conflicts with the same transactions.
* We use the PostgreSQL SLRU system to hold summarized
information about older committed transactions to put an upper bound
on RAM used. Beyond that limit, information spills to disk.
Performance can degrade in a pessimal situation, but it should be
tolerable, and transactions won't need to be cancelled or blocked
from starting.
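
A usage sketch for the DEFERRABLE "opt out" behavior described above
(the table name is hypothetical); pg_dump's new --serializable-deferrable
option requests the same thing for a whole dump:

    -- Wait, at the point the snapshot is taken, until a snapshot is
    -- available that can never participate in a dangerous structure;
    -- after that the transaction runs with no SSI overhead and cannot
    -- be cancelled by a serialization failure:
    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE;
    SELECT count(*) FROM orders;
    COMMIT;

    -- Session-level equivalent:
    SET default_transaction_deferrable = on;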
R&D Issues
----------
This is intended to be the place to record specific issues which need
more detailed review or analysis.
* WAL file replay. While serializable implementations using S2PL
can guarantee that the write-ahead log contains commits in a sequence
consistent with some serial execution of serializable transactions,
SSI cannot make that guarantee. While the WAL replay is no less
consistent than under snapshot isolation, it is possible that under
PITR recovery or hot standby a database could reach a readable state
where some transactions appear before other transactions which would
have had to precede them to maintain serializable consistency. In
essence, if we do nothing, WAL replay will be at snapshot isolation
even for serializable transactions. Is this OK? If not, how do we
address it?
* External replication. Look at how this impacts external
replication solutions, like Postgres-R, Slony, pgpool, HS/SR, etc.
This is related to the "WAL file replay" issue.
* Weak-memory-ordering machines. Make sure that shared memory
access which involves visibility across multiple transactions uses
locks as needed to avoid problems. On the other hand, ensure that we
really need volatile where we're using it.
http://archives.postgresql.org/pgsql-committers/2008-06/msg00228.php
* UNIQUE btree search for equality on all columns. Since a search
of a UNIQUE index using equality tests on all columns will lock the
heap tuple if an entry is found, it appears that there is no need to
get a predicate lock on the index in that case. A predicate lock is
still needed for such a search if a matching index entry which points
to a visible tuple is not found.
* Planner index probes. To avoid problems with data skew at the
ends of an index which have historically caused bad plans, the
planner now probes the end of an index to see what the maximum or
minimum value is when a query appears to be requesting a range of
data outside what statistics shows is present. These planner checks
don't require predicate locking, but there's currently no easy way to
avoid it. What can we do to avoid predicate locking for such planner
activity?
* Minimize touching of shared memory. Should lists in shared
memory push entries which have just been returned to the front of the
available list, so they will be popped back off soon and some memory
might never be touched, or should we keep adding returned items to
the end of the available list?
Footnotes
---------
[1] http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt
Search for "serial execution" to find the relevant section.
[2] http://db.cs.berkeley.edu/papers/fntdb07-architecture.pdf
Joseph M. Hellerstein, Michael Stonebraker and James Hamilton. 2007.
Architecture of a Database System. Foundations and Trends(R) in
Databases Vol. 1, No. 2 (2007) 141-259.
Of particular interest:
* 6.1 A Note on ACID
* 6.2 A Brief Review of Serializability
* 6.3 Locking and Latching
* 6.3.1 Transaction Isolation Levels
* 6.5.3 Next-Key Locking: Physical Surrogates for Logical

View File

@ -28,6 +28,7 @@
#include "miscadmin.h"
#include "pg_trace.h"
#include "storage/ipc.h"
#include "storage/predicate.h"
#include "storage/proc.h"
#include "storage/spin.h"
@ -178,6 +179,9 @@ NumLWLocks(void)
/* async.c needs one per Async buffer */
numLocks += NUM_ASYNC_BUFFERS;
/* predicate.c needs one per old serializable xid buffer */
numLocks += NUM_OLDSERXID_BUFFERS;
/*
* Add any requested by loadable modules; for backwards-compatibility
* reasons, allocate at least NUM_USER_DEFINED_LWLOCKS of them even if

File diff suppressed because it is too large

View File

@ -374,6 +374,10 @@ standard_ProcessUtility(Node *parsetree,
SetPGVariable("transaction_read_only",
list_make1(item->arg),
true);
else if (strcmp(item->defname, "transaction_deferrable") == 0)
SetPGVariable("transaction_deferrable",
list_make1(item->arg),
true);
}
}
break;

View File

@ -15,6 +15,7 @@
#include "catalog/pg_type.h"
#include "funcapi.h"
#include "miscadmin.h"
#include "storage/predicate_internals.h"
#include "storage/proc.h"
#include "utils/builtins.h"
@ -32,11 +33,20 @@ static const char *const LockTagTypeNames[] = {
"advisory"
};
/* This must match enum PredicateLockTargetType (predicate_internals.h) */
static const char *const PredicateLockTagTypeNames[] = {
"relation",
"page",
"tuple"
};
/* Working status for pg_lock_status */
typedef struct
{
LockData *lockData; /* state data from lmgr */
int currIdx; /* current PROCLOCK index */
PredicateLockData *predLockData; /* state data for pred locks */
int predLockIdx; /* current index for pred lock */
} PG_Lock_Status;
@ -69,6 +79,7 @@ pg_lock_status(PG_FUNCTION_ARGS)
FuncCallContext *funcctx;
PG_Lock_Status *mystatus;
LockData *lockData;
PredicateLockData *predLockData;
if (SRF_IS_FIRSTCALL())
{
@ -126,6 +137,8 @@ pg_lock_status(PG_FUNCTION_ARGS)
mystatus->lockData = GetLockStatusData();
mystatus->currIdx = 0;
mystatus->predLockData = GetPredicateLockStatusData();
mystatus->predLockIdx = 0;
MemoryContextSwitchTo(oldcontext);
}
@ -303,6 +316,72 @@ pg_lock_status(PG_FUNCTION_ARGS)
SRF_RETURN_NEXT(funcctx, result);
}
/*
* Have returned all regular locks. Now start on the SIREAD predicate
* locks.
*/
predLockData = mystatus->predLockData;
if (mystatus->predLockIdx < predLockData->nelements)
{
PredicateLockTargetType lockType;
PREDICATELOCKTARGETTAG *predTag = &(predLockData->locktags[mystatus->predLockIdx]);
SERIALIZABLEXACT *xact = &(predLockData->xacts[mystatus->predLockIdx]);
Datum values[14];
bool nulls[14];
HeapTuple tuple;
Datum result;
mystatus->predLockIdx++;
/*
* Form tuple with appropriate data.
*/
MemSet(values, 0, sizeof(values));
MemSet(nulls, false, sizeof(nulls));
/* lock type */
lockType = GET_PREDICATELOCKTARGETTAG_TYPE(*predTag);
values[0] = CStringGetTextDatum(PredicateLockTagTypeNames[lockType]);
/* lock target */
values[1] = GET_PREDICATELOCKTARGETTAG_DB(*predTag);
values[2] = GET_PREDICATELOCKTARGETTAG_RELATION(*predTag);
if (lockType == PREDLOCKTAG_TUPLE)
values[4] = GET_PREDICATELOCKTARGETTAG_OFFSET(*predTag);
else
nulls[4] = true;
if ((lockType == PREDLOCKTAG_TUPLE) ||
(lockType == PREDLOCKTAG_PAGE))
values[3] = GET_PREDICATELOCKTARGETTAG_PAGE(*predTag);
else
nulls[3] = true;
/* these fields are targets for other types of locks */
nulls[5] = true; /* virtualxid */
nulls[6] = true; /* transactionid */
nulls[7] = true; /* classid */
nulls[8] = true; /* objid */
nulls[9] = true; /* objsubid */
/* lock holder */
values[10] = VXIDGetDatum(xact->vxid.backendId,
xact->vxid.localTransactionId);
nulls[11] = true; /* pid */
/*
* Lock mode. Currently all predicate locks are SIReadLocks, which are
* always held (never waiting)
*/
values[12] = CStringGetTextDatum("SIReadLock");
values[13] = BoolGetDatum(true);
tuple = heap_form_tuple(funcctx->tuple_desc, values, nulls);
result = HeapTupleGetDatum(tuple);
SRF_RETURN_NEXT(funcctx, result);
}
SRF_RETURN_DONE(funcctx);
}

View File

@ -59,6 +59,7 @@
#include "storage/bufmgr.h"
#include "storage/standby.h"
#include "storage/fd.h"
#include "storage/predicate.h"
#include "tcop/tcopprot.h"
#include "tsearch/ts_cache.h"
#include "utils/builtins.h"
@ -1096,6 +1097,23 @@ static struct config_bool ConfigureNamesBool[] =
&XactReadOnly,
false, assign_transaction_read_only, NULL
},
{
{"default_transaction_deferrable", PGC_USERSET, CLIENT_CONN_STATEMENT,
gettext_noop("Sets the default deferrable status of new transactions."),
NULL
},
&DefaultXactDeferrable,
false, NULL, NULL
},
{
{"transaction_deferrable", PGC_USERSET, CLIENT_CONN_STATEMENT,
gettext_noop("Whether to defer a read-only serializable transaction until it can be executed with no possible serialization failures."),
NULL,
GUC_NO_RESET_ALL | GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE
},
&XactDeferrable,
false, assign_transaction_deferrable, NULL
},
{
{"check_function_bodies", PGC_USERSET, CLIENT_CONN_STATEMENT,
gettext_noop("Check function bodies during CREATE FUNCTION."),
@ -1695,6 +1713,17 @@ static struct config_int ConfigureNamesInt[] =
64, 10, INT_MAX, NULL, NULL
},
{
{"max_predicate_locks_per_transaction", PGC_POSTMASTER, LOCK_MANAGEMENT,
gettext_noop("Sets the maximum number of predicate locks per transaction."),
gettext_noop("The shared predicate lock table is sized on the assumption that "
"at most max_predicate_locks_per_transaction * max_connections distinct "
"objects will need to be locked at any one time.")
},
&max_predicate_locks_per_xact,
64, 10, INT_MAX, NULL, NULL
},
{
{"authentication_timeout", PGC_SIGHUP, CONN_AUTH_SECURITY,
gettext_noop("Sets the maximum allowed time to complete client authentication."),
@ -3460,6 +3489,8 @@ InitializeGUCOptions(void)
PGC_POSTMASTER, PGC_S_OVERRIDE);
SetConfigOption("transaction_read_only", "no",
PGC_POSTMASTER, PGC_S_OVERRIDE);
SetConfigOption("transaction_deferrable", "no",
PGC_POSTMASTER, PGC_S_OVERRIDE);
/*
* For historical reasons, some GUC parameters can receive defaults from
@ -5699,6 +5730,9 @@ ExecSetVariableStmt(VariableSetStmt *stmt)
else if (strcmp(item->defname, "transaction_read_only") == 0)
SetPGVariable("transaction_read_only",
list_make1(item->arg), stmt->is_local);
else if (strcmp(item->defname, "transaction_deferrable") == 0)
SetPGVariable("transaction_deferrable",
list_make1(item->arg), stmt->is_local);
else
elog(ERROR, "unexpected SET TRANSACTION element: %s",
item->defname);
@ -5718,6 +5752,9 @@ ExecSetVariableStmt(VariableSetStmt *stmt)
else if (strcmp(item->defname, "transaction_read_only") == 0)
SetPGVariable("default_transaction_read_only",
list_make1(item->arg), stmt->is_local);
else if (strcmp(item->defname, "transaction_deferrable") == 0)
SetPGVariable("default_transaction_deferrable",
list_make1(item->arg), stmt->is_local);
else
elog(ERROR, "unexpected SET SESSION element: %s",
item->defname);

View File

@ -450,6 +450,7 @@
#check_function_bodies = on
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = off
#default_transaction_deferrable = off
#session_replication_role = 'origin'
#statement_timeout = 0 # in milliseconds, 0 is disabled
#vacuum_freeze_min_age = 50000000
@ -501,7 +502,8 @@
# Note: Each lock table slot uses ~270 bytes of shared memory, and there are
# max_locks_per_transaction * (max_connections + max_prepared_transactions)
# lock table slots.
#max_predicate_locks_per_transaction = 64 # min 10
# (change requires restart)
#------------------------------------------------------------------------------
# VERSION/PLATFORM COMPATIBILITY

View File

@ -22,6 +22,7 @@
#include "access/hash.h"
#include "storage/bufmgr.h"
#include "storage/predicate.h"
#include "storage/proc.h"
#include "utils/memutils.h"
#include "utils/rel.h"
@ -261,7 +262,10 @@ ResourceOwnerReleaseInternal(ResourceOwner owner,
* the top of the recursion.
*/
if (owner == TopTransactionResourceOwner)
{
ProcReleaseLocks(isCommit);
ReleasePredicateLocks(isCommit);
}
}
else
{

View File

@ -27,6 +27,7 @@
#include "access/transam.h"
#include "access/xact.h"
#include "storage/predicate.h"
#include "storage/proc.h"
#include "storage/procarray.h"
#include "utils/memutils.h"
@ -126,9 +127,6 @@ GetTransactionSnapshot(void)
{
Assert(RegisteredSnapshots == 0);
CurrentSnapshot = GetSnapshotData(&CurrentSnapshotData);
FirstSnapshotSet = true;
/*
* In transaction-snapshot mode, the first snapshot must live until
* end of xact regardless of what the caller does with it, so we must
@ -136,11 +134,20 @@ GetTransactionSnapshot(void)
*/
if (IsolationUsesXactSnapshot())
{
CurrentSnapshot = RegisterSnapshotOnOwner(CurrentSnapshot,
if (IsolationIsSerializable())
CurrentSnapshot = RegisterSerializableTransaction(&CurrentSnapshotData);
else
{
CurrentSnapshot = GetSnapshotData(&CurrentSnapshotData);
CurrentSnapshot = RegisterSnapshotOnOwner(CurrentSnapshot,
TopTransactionResourceOwner);
}
registered_xact_snapshot = true;
}
else
CurrentSnapshot = GetSnapshotData(&CurrentSnapshotData);
FirstSnapshotSet = true;
return CurrentSnapshot;
}

View File

@ -2299,6 +2299,7 @@ main(int argc, char *argv[])
"pg_xlog/archive_status",
"pg_clog",
"pg_notify",
"pg_serial",
"pg_subtrans",
"pg_twophase",
"pg_multixact/members",

View File

@ -11,14 +11,14 @@
* script that reproduces the schema in terms of SQL that is understood
* by PostgreSQL
*
* Note that pg_dump runs in a serializable transaction, so it sees a
* consistent snapshot of the database including system catalogs.
* However, it relies in part on various specialized backend functions
* like pg_get_indexdef(), and those things tend to run on SnapshotNow
* time, ie they look at the currently committed state. So it is
* possible to get 'cache lookup failed' error if someone performs DDL
* changes while a dump is happening. The window for this sort of thing
* is from the beginning of the serializable transaction to
* Note that pg_dump runs in a transaction-snapshot mode transaction,
* so it sees a consistent snapshot of the database including system
* catalogs. However, it relies in part on various specialized backend
* functions like pg_get_indexdef(), and those things tend to run on
* SnapshotNow time, ie they look at the currently committed state. So
* it is possible to get 'cache lookup failed' error if someone
* performs DDL changes while a dump is happening. The window for this
* sort of thing is from the acquisition of the transaction snapshot to
* getSchemaData() (when pg_dump acquires AccessShareLock on every
* table it intends to dump). It isn't very large, but it can happen.
*
@ -135,6 +135,7 @@ static int dump_inserts = 0;
static int column_inserts = 0;
static int no_security_label = 0;
static int no_unlogged_table_data = 0;
static int serializable_deferrable = 0;
static void help(const char *progname);
@ -318,6 +319,7 @@ main(int argc, char **argv)
{"no-tablespaces", no_argument, &outputNoTablespaces, 1},
{"quote-all-identifiers", no_argument, &quote_all_identifiers, 1},
{"role", required_argument, NULL, 3},
{"serializable-deferrable", no_argument, &serializable_deferrable, 1},
{"use-set-session-authorization", no_argument, &use_setsessauth, 1},
{"no-security-label", no_argument, &no_security_label, 1},
{"no-unlogged-table-data", no_argument, &no_unlogged_table_data, 1},
@ -669,11 +671,21 @@ main(int argc, char **argv)
no_security_label = 1;
/*
* Start serializable transaction to dump consistent data.
* Start transaction-snapshot mode transaction to dump consistent data.
*/
do_sql_command(g_conn, "BEGIN");
do_sql_command(g_conn, "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE");
if (g_fout->remoteVersion >= 90100)
{
if (serializable_deferrable)
do_sql_command(g_conn,
"SET TRANSACTION ISOLATION LEVEL SERIALIZABLE, "
"READ ONLY, DEFERRABLE");
else
do_sql_command(g_conn,
"SET TRANSACTION ISOLATION LEVEL REPEATABLE READ");
}
else
do_sql_command(g_conn, "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE");
/* Select the appropriate subquery to convert user IDs to names */
if (g_fout->remoteVersion >= 80100)
@ -864,6 +876,7 @@ help(const char *progname)
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
printf(_(" --no-tablespaces do not dump tablespace assignments\n"));
printf(_(" --quote-all-identifiers quote all identifiers, even if not keywords\n"));
printf(_(" --serializable-deferrable wait until the dump can run without anomalies\n"));
printf(_(" --role=ROLENAME do SET ROLE before dump\n"));
printf(_(" --no-security-label do not dump security label assignments\n"));
printf(_(" --no-unlogged-table-data do not dump unlogged table data\n"));

View File

@ -82,8 +82,8 @@ extern HeapTuple heap_getnext(HeapScanDesc scan, ScanDirection direction);
extern bool heap_fetch(Relation relation, Snapshot snapshot,
HeapTuple tuple, Buffer *userbuf, bool keep_buf,
Relation stats_relation);
extern bool heap_hot_search_buffer(ItemPointer tid, Buffer buffer,
Snapshot snapshot, bool *all_dead);
extern bool heap_hot_search_buffer(ItemPointer tid, Relation relation,
Buffer buffer, Snapshot snapshot, bool *all_dead);
extern bool heap_hot_search(ItemPointer tid, Relation relation,
Snapshot snapshot, bool *all_dead);

View File

@ -35,6 +35,7 @@ typedef struct HeapScanDescData
BlockNumber rs_startblock; /* block # to start at */
BufferAccessStrategy rs_strategy; /* access strategy for reads */
bool rs_syncscan; /* report location to syncscan logic? */
bool rs_relpredicatelocked; /* predicate lock on relation exists */
/* scan current state */
bool rs_inited; /* false = scan not init'd yet */

View File

@ -23,8 +23,9 @@ typedef uint8 TwoPhaseRmgrId;
*/
#define TWOPHASE_RM_END_ID 0
#define TWOPHASE_RM_LOCK_ID 1
#define TWOPHASE_RM_PGSTAT_ID 2
#define TWOPHASE_RM_MULTIXACT_ID 3
#define TWOPHASE_RM_PREDICATELOCK_ID 2
#define TWOPHASE_RM_PGSTAT_ID 3
#define TWOPHASE_RM_MULTIXACT_ID 4
#define TWOPHASE_RM_MAX_ID TWOPHASE_RM_MULTIXACT_ID
extern const TwoPhaseCallback twophase_recover_callbacks[];

View File

@ -32,15 +32,26 @@ extern int DefaultXactIsoLevel;
extern int XactIsoLevel;
/*
* We only implement two isolation levels internally. This macro should
* be used to check which one is selected.
* We implement three isolation levels internally.
* The two stronger ones use one snapshot per database transaction;
* the others use one snapshot per statement.
* Serializable uses predicate locks in addition to snapshots.
* These macros should be used to check which isolation level is selected.
*/
#define IsolationUsesXactSnapshot() (XactIsoLevel >= XACT_REPEATABLE_READ)
#define IsolationIsSerializable() (XactIsoLevel == XACT_SERIALIZABLE)
/* Xact read-only state */
extern bool DefaultXactReadOnly;
extern bool XactReadOnly;
/*
* Xact is deferrable -- only meaningful (currently) for read only
* SERIALIZABLE transactions
*/
extern bool DefaultXactDeferrable;
extern bool XactDeferrable;
/* Asynchronous commits */
extern bool XactSyncCommit;
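
For illustration, here is a minimal sketch (not part of this commit) of how
backend code can branch on the three internal isolation levels using the
macros above; it assumes the normal backend environment, with access/xact.h
included and XactIsoLevel already set by the GUC machinery, and the function
name is hypothetical:

    #include "postgres.h"
    #include "access/xact.h"

    static void
    example_isolation_dispatch(void)
    {
        if (IsolationIsSerializable())
        {
            /* SERIALIZABLE: one snapshot per transaction, plus predicate locks */
        }
        else if (IsolationUsesXactSnapshot())
        {
            /* REPEATABLE READ: one snapshot per transaction, no predicate locks */
        }
        else
        {
            /* READ COMMITTED / READ UNCOMMITTED: one snapshot per statement */
        }
    }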

View File

@ -49,6 +49,7 @@ CATALOG(pg_am,2601)
bool amsearchnulls; /* can AM search for NULL/NOT NULL entries? */
bool amstorage; /* can storage type differ from column type? */
bool amclusterable; /* does AM support cluster command? */
bool ampredlocks; /* does AM handle predicate locks? */
Oid amkeytype; /* type of data in index, or InvalidOid */
regproc aminsert; /* "insert this tuple" function */
regproc ambeginscan; /* "prepare for index scan" function */
@ -77,7 +78,7 @@ typedef FormData_pg_am *Form_pg_am;
* compiler constants for pg_am
* ----------------
*/
#define Natts_pg_am 27
#define Natts_pg_am 28
#define Anum_pg_am_amname 1
#define Anum_pg_am_amstrategies 2
#define Anum_pg_am_amsupport 3
@ -90,37 +91,38 @@ typedef FormData_pg_am *Form_pg_am;
#define Anum_pg_am_amsearchnulls 10
#define Anum_pg_am_amstorage 11
#define Anum_pg_am_amclusterable 12
#define Anum_pg_am_amkeytype 13
#define Anum_pg_am_aminsert 14
#define Anum_pg_am_ambeginscan 15
#define Anum_pg_am_amgettuple 16
#define Anum_pg_am_amgetbitmap 17
#define Anum_pg_am_amrescan 18
#define Anum_pg_am_amendscan 19
#define Anum_pg_am_ammarkpos 20
#define Anum_pg_am_amrestrpos 21
#define Anum_pg_am_ambuild 22
#define Anum_pg_am_ambuildempty 23
#define Anum_pg_am_ambulkdelete 24
#define Anum_pg_am_amvacuumcleanup 25
#define Anum_pg_am_amcostestimate 26
#define Anum_pg_am_amoptions 27
#define Anum_pg_am_ampredlocks 13
#define Anum_pg_am_amkeytype 14
#define Anum_pg_am_aminsert 15
#define Anum_pg_am_ambeginscan 16
#define Anum_pg_am_amgettuple 17
#define Anum_pg_am_amgetbitmap 18
#define Anum_pg_am_amrescan 19
#define Anum_pg_am_amendscan 20
#define Anum_pg_am_ammarkpos 21
#define Anum_pg_am_amrestrpos 22
#define Anum_pg_am_ambuild 23
#define Anum_pg_am_ambuildempty 24
#define Anum_pg_am_ambulkdelete 25
#define Anum_pg_am_amvacuumcleanup 26
#define Anum_pg_am_amcostestimate 27
#define Anum_pg_am_amoptions 28
/* ----------------
* initial contents of pg_am
* ----------------
*/
DATA(insert OID = 403 ( btree 5 1 t f t t t t t f t 0 btinsert btbeginscan btgettuple btgetbitmap btrescan btendscan btmarkpos btrestrpos btbuild btbuildempty btbulkdelete btvacuumcleanup btcostestimate btoptions ));
DATA(insert OID = 403 ( btree 5 1 t f t t t t t f t t 0 btinsert btbeginscan btgettuple btgetbitmap btrescan btendscan btmarkpos btrestrpos btbuild btbuildempty btbulkdelete btvacuumcleanup btcostestimate btoptions ));
DESCR("b-tree index access method");
#define BTREE_AM_OID 403
DATA(insert OID = 405 ( hash 1 1 f f t f f f f f f 23 hashinsert hashbeginscan hashgettuple hashgetbitmap hashrescan hashendscan hashmarkpos hashrestrpos hashbuild hashbuildempty hashbulkdelete hashvacuumcleanup hashcostestimate hashoptions ));
DATA(insert OID = 405 ( hash 1 1 f f t f f f f f f f 23 hashinsert hashbeginscan hashgettuple hashgetbitmap hashrescan hashendscan hashmarkpos hashrestrpos hashbuild hashbuildempty hashbulkdelete hashvacuumcleanup hashcostestimate hashoptions ));
DESCR("hash index access method");
#define HASH_AM_OID 405
DATA(insert OID = 783 ( gist 0 8 f t f f t t t t t 0 gistinsert gistbeginscan gistgettuple gistgetbitmap gistrescan gistendscan gistmarkpos gistrestrpos gistbuild gistbuildempty gistbulkdelete gistvacuumcleanup gistcostestimate gistoptions ));
DATA(insert OID = 783 ( gist 0 8 f t f f t t t t t f 0 gistinsert gistbeginscan gistgettuple gistgetbitmap gistrescan gistendscan gistmarkpos gistrestrpos gistbuild gistbuildempty gistbulkdelete gistvacuumcleanup gistcostestimate gistoptions ));
DESCR("GiST index access method");
#define GIST_AM_OID 783
DATA(insert OID = 2742 ( gin 0 5 f f f f t t f t f 0 gininsert ginbeginscan - gingetbitmap ginrescan ginendscan ginmarkpos ginrestrpos ginbuild ginbuildempty ginbulkdelete ginvacuumcleanup gincostestimate ginoptions ));
DATA(insert OID = 2742 ( gin 0 5 f f f f t t f t f f 0 gininsert ginbeginscan - gingetbitmap ginrescan ginendscan ginmarkpos ginrestrpos ginbuild ginbuildempty ginbulkdelete ginvacuumcleanup gincostestimate ginoptions ));
DESCR("GIN index access method");
#define GIN_AM_OID 2742

View File

@ -26,6 +26,8 @@ extern bool assign_transaction_read_only(bool value,
extern const char *assign_XactIsoLevel(const char *value,
bool doit, GucSource source);
extern const char *show_XactIsoLevel(void);
extern bool assign_transaction_deferrable(bool newval, bool doit,
GucSource source);
extern bool assign_random_seed(double value,
bool doit, GucSource source);
extern const char *show_random_seed(void);

View File

@ -27,6 +27,10 @@
#define LOG2_NUM_LOCK_PARTITIONS 4
#define NUM_LOCK_PARTITIONS (1 << LOG2_NUM_LOCK_PARTITIONS)
/* Number of partitions the shared predicate lock tables are divided into */
#define LOG2_NUM_PREDICATELOCK_PARTITIONS 4
#define NUM_PREDICATELOCK_PARTITIONS (1 << LOG2_NUM_PREDICATELOCK_PARTITIONS)
/*
* We have a number of predefined LWLocks, plus a bunch of LWLocks that are
* dynamically assigned (e.g., for shared buffers). The LWLock structures
@ -70,12 +74,18 @@ typedef enum LWLockId
RelationMappingLock,
AsyncCtlLock,
AsyncQueueLock,
SerializableXactHashLock,
SerializableFinishedListLock,
SerializablePredicateLockListLock,
OldSerXidLock,
PredicateLockNextRowLinkLock,
/* Individual lock IDs end here */
FirstBufMappingLock,
FirstLockMgrLock = FirstBufMappingLock + NUM_BUFFER_PARTITIONS,
FirstPredicateLockMgrLock = FirstLockMgrLock + NUM_LOCK_PARTITIONS,
/* must be last except for MaxDynamicLWLock: */
NumFixedLWLocks = FirstLockMgrLock + NUM_LOCK_PARTITIONS,
NumFixedLWLocks = FirstPredicateLockMgrLock + NUM_PREDICATELOCK_PARTITIONS,
MaxDynamicLWLock = 1000000000
} LWLockId;
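
The partitioned predicate lock tables are addressed the same way as the
existing buffer-mapping and lock-manager partitions. A hypothetical helper
(not shown in this excerpt) illustrating that mapping:

    #include "postgres.h"
    #include "storage/lwlock.h"

    /* map a predicate-lock-target hash code to the LWLock protecting
       its partition of the shared predicate lock tables */
    static LWLockId
    example_predicatelock_partition_lock(uint32 hashcode)
    {
        return (LWLockId) (FirstPredicateLockMgrLock +
                           (hashcode % NUM_PREDICATELOCK_PARTITIONS));
    }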

View File

@ -0,0 +1,67 @@
/*-------------------------------------------------------------------------
*
* predicate.h
* POSTGRES public predicate locking definitions.
*
*
* Portions Copyright (c) 1996-2011, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* src/include/storage/predicate.h
*
*-------------------------------------------------------------------------
*/
#ifndef PREDICATE_H
#define PREDICATE_H
#include "utils/relcache.h"
#include "utils/snapshot.h"
/*
* GUC variables
*/
extern int max_predicate_locks_per_xact;
/* Number of SLRU buffers to use for predicate locking */
#define NUM_OLDSERXID_BUFFERS 16
/*
* function prototypes
*/
/* housekeeping for shared memory predicate lock structures */
extern void InitPredicateLocks(void);
extern Size PredicateLockShmemSize(void);
/* predicate lock reporting */
extern bool PageIsPredicateLocked(const Relation relation, const BlockNumber blkno);
/* predicate lock maintenance */
extern Snapshot RegisterSerializableTransaction(Snapshot snapshot);
extern void RegisterPredicateLockingXid(const TransactionId xid);
extern void PredicateLockRelation(const Relation relation);
extern void PredicateLockPage(const Relation relation, const BlockNumber blkno);
extern void PredicateLockTuple(const Relation relation, const HeapTuple tuple);
extern void PredicateLockTupleRowVersionLink(const Relation relation, const HeapTuple oldTuple, const HeapTuple newTuple);
extern void PredicateLockPageSplit(const Relation relation, const BlockNumber oldblkno, const BlockNumber newblkno);
extern void PredicateLockPageCombine(const Relation relation, const BlockNumber oldblkno, const BlockNumber newblkno);
extern void ReleasePredicateLocks(const bool isCommit);
/* conflict detection (may also trigger rollback) */
extern void CheckForSerializableConflictOut(const bool valid, const Relation relation, const HeapTuple tuple, const Buffer buffer);
extern void CheckForSerializableConflictIn(const Relation relation, const HeapTuple tuple, const Buffer buffer);
/* final rollback checking */
extern void PreCommit_CheckForSerializationFailure(void);
/* two-phase commit support */
extern void AtPrepare_PredicateLocks(void);
extern void PostPrepare_PredicateLocks(TransactionId xid);
extern void PredicateLockTwoPhaseFinish(TransactionId xid, bool isCommit);
extern void predicatelock_twophase_recover(TransactionId xid, uint16 info,
void *recdata, uint32 len);
#endif /* PREDICATE_H */
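
For illustration only, a hypothetical sketch of a read-side caller of the
predicate lock manager, using just the prototypes above; the real call sites
live in the heap and index access methods and are not shown in this excerpt,
and the extra includes and the function name are assumptions made for the
sake of a self-contained example:

    #include "postgres.h"
    #include "access/htup.h"
    #include "storage/buf.h"
    #include "storage/predicate.h"
    #include "utils/rel.h"

    static void
    example_read_tuple(Relation rel, HeapTuple tuple, Buffer buf, bool valid)
    {
        /* check whether reading this tuple exposes a dangerous rw-conflict
           with a concurrent serializable transaction */
        CheckForSerializableConflictOut(valid, rel, tuple, buf);

        /* then record the read by taking a tuple-level predicate lock */
        PredicateLockTuple(rel, tuple);
    }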

View File

@ -0,0 +1,476 @@
/*-------------------------------------------------------------------------
*
* predicate_internals.h
* POSTGRES internal predicate locking definitions.
*
*
* Portions Copyright (c) 1996-2011, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* src/include/storage/predicate_internals.h
*
*-------------------------------------------------------------------------
*/
#ifndef PREDICATE_INTERNALS_H
#define PREDICATE_INTERNALS_H
#include "storage/lock.h"
/*
* Commit number.
*/
typedef uint64 SerCommitSeqNo;
/*
* Reserved commit sequence numbers:
* - 0 is reserved to indicate a non-existent SLRU entry; it cannot be
* used as a SerCommitSeqNo, even an invalid one
* - InvalidSerCommitSeqNo is used to indicate a transaction that
* hasn't committed yet, so use a number greater than all valid
* ones to make comparison do the expected thing
* - RecoverySerCommitSeqNo is used to refer to transactions that
* happened before a crash/recovery, since we restart the sequence
* at that point. It's earlier than all normal sequence numbers,
* and is only used by recovered prepared transactions
*/
#define InvalidSerCommitSeqNo UINT64_MAX
#define RecoverySerCommitSeqNo ((SerCommitSeqNo) 1)
#define FirstNormalSerCommitSeqNo ((SerCommitSeqNo) 2)
/*
* The SERIALIZABLEXACT struct contains information needed for each
* serializable database transaction to support SSI techniques.
*
* A home-grown list is maintained in shared memory to manage these.
* An entry is used when the serializable transaction acquires a snapshot.
* Unless the transaction is rolled back, this entry must generally remain
* until all concurrent transactions have completed. (There are special
* optimizations for READ ONLY transactions which often allow them to be
* cleaned up earlier.) A transaction which is rolled back is cleaned up
* as soon as possible.
*
* Eligibility for cleanup of committed transactions is generally determined
* by comparing the transaction's finishedBefore field to
* SerializableGlobalXmin.
*/
typedef struct SERIALIZABLEXACT
{
VirtualTransactionId vxid; /* The executing process always has one of
* these. */
SerCommitSeqNo commitSeqNo;
union /* these values are not both interesting at
* the same time */
{
SerCommitSeqNo earliestOutConflictCommit; /* when committed with
* conflict out */
SerCommitSeqNo lastCommitBeforeSnapshot; /* when not committed or
* no conflict out */
} SeqNo;
SHM_QUEUE outConflicts; /* list of write transactions whose data we
* couldn't read. */
SHM_QUEUE inConflicts; /* list of read transactions which couldn't
* see our write. */
SHM_QUEUE predicateLocks; /* list of associated PREDICATELOCK objects */
SHM_QUEUE finishedLink; /* list link in
* FinishedSerializableTransactions */
/*
* for r/o transactions: list of concurrent r/w transactions that we could
* potentially have conflicts with, and vice versa for r/w transactions
*/
SHM_QUEUE possibleUnsafeConflicts;
TransactionId topXid; /* top level xid for the transaction, if one
* exists; else invalid */
TransactionId finishedBefore; /* invalid means still running; else
* the struct expires when no
* serializable xids are before this. */
TransactionId xmin; /* the transaction's snapshot xmin */
uint32 flags; /* OR'd combination of values defined below */
int pid; /* pid of associated process */
} SERIALIZABLEXACT;
#define SXACT_FLAG_ROLLED_BACK 0x00000001
#define SXACT_FLAG_COMMITTED 0x00000002
#define SXACT_FLAG_CONFLICT_OUT 0x00000004
#define SXACT_FLAG_READ_ONLY 0x00000008
#define SXACT_FLAG_DID_WRITE 0x00000010
#define SXACT_FLAG_MARKED_FOR_DEATH 0x00000020
#define SXACT_FLAG_DEFERRABLE_WAITING 0x00000040
#define SXACT_FLAG_RO_SAFE 0x00000080
#define SXACT_FLAG_RO_UNSAFE 0x00000100
#define SXACT_FLAG_SUMMARY_CONFLICT_IN 0x00000200
#define SXACT_FLAG_SUMMARY_CONFLICT_OUT 0x00000400
#define SXACT_FLAG_PREPARED 0x00000800
/*
* The following types are used to provide an ad hoc list for holding
* SERIALIZABLEXACT objects. An HTAB is overkill, since there is no need to
* access these by key -- there are direct pointers to these objects where
* needed. If a shared memory list is created, these types can probably be
* eliminated in favor of using the general solution.
*/
typedef struct PredXactListElementData
{
SHM_QUEUE link;
SERIALIZABLEXACT sxact;
} PredXactListElementData;
typedef struct PredXactListElementData *PredXactListElement;
#define PredXactListElementDataSize \
((Size)MAXALIGN(sizeof(PredXactListElementData)))
typedef struct PredXactListData
{
SHM_QUEUE availableList;
SHM_QUEUE activeList;
/*
* These global variables are maintained when registering and cleaning up
* serializable transactions. They must be global across all backends,
* but are not needed outside the predicate.c source file.
*/
TransactionId SxactGlobalXmin; /* global xmin for active serializable
* transactions */
int SxactGlobalXminCount; /* how many active serializable
* transactions have this xmin */
int WritableSxactCount; /* how many non-read-only serializable
* transactions are active */
SerCommitSeqNo LastSxactCommitSeqNo; /* a strictly monotonically
* increasing number for
* commits of serializable
* transactions */
/* Protected by SerializableXactHashLock. */
SerCommitSeqNo CanPartialClearThrough; /* can clear predicate locks
* and inConflicts for
* committed transactions
* through this seq no */
/* Protected by SerializableFinishedListLock. */
SerCommitSeqNo HavePartialClearedThrough; /* have cleared through this
* seq no */
SERIALIZABLEXACT *OldCommittedSxact; /* shared copy of dummy sxact */
bool NeedTargetLinkCleanup; /* to save cleanup effort for rare
* case */
PredXactListElement element;
} PredXactListData;
typedef struct PredXactListData *PredXactList;
#define PredXactListDataSize \
((Size)MAXALIGN(sizeof(PredXactListData)))
/*
* The following types are used to provide lists of rw-conflicts between
* pairs of transactions. Since exactly the same information is needed,
* they are also used to record possible unsafe transaction relationships
* for purposes of identifying safe snapshots for read-only transactions.
*
* When a RWConflictData is not in use to record either type of relationship
* between a pair of transactions, it is kept on an "available" list. The
* outLink field is used for maintaining that list.
*/
typedef struct RWConflictData
{
SHM_QUEUE outLink; /* link for list of conflicts out from a sxact */
SHM_QUEUE inLink; /* link for list of conflicts in to a sxact */
SERIALIZABLEXACT *sxactOut;
SERIALIZABLEXACT *sxactIn;
} RWConflictData;
typedef struct RWConflictData *RWConflict;
#define RWConflictDataSize \
((Size)MAXALIGN(sizeof(RWConflictData)))
typedef struct RWConflictPoolHeaderData
{
SHM_QUEUE availableList;
RWConflict element;
} RWConflictPoolHeaderData;
typedef struct RWConflictPoolHeaderData *RWConflictPoolHeader;
#define RWConflictPoolHeaderDataSize \
((Size)MAXALIGN(sizeof(RWConflictPoolHeaderData)))
/*
* The SERIALIZABLEXIDTAG struct identifies an xid assigned to a serializable
* transaction or any of its subtransactions.
*/
typedef struct SERIALIZABLEXIDTAG
{
TransactionId xid;
} SERIALIZABLEXIDTAG;
/*
* The SERIALIZABLEXID struct provides a link from a TransactionId for a
* serializable transaction to the related SERIALIZABLEXACT record, even if
* the transaction has completed and its connection has been closed.
*
* These are created as new top level transaction IDs are first assigned to
* transactions which are participating in predicate locking. This may
* never happen for a particular transaction if it doesn't write anything.
* They are removed with their related serializable transaction objects.
*
* The SubTransGetTopmostTransaction method is used where necessary to get
* from an XID which might be from a subtransaction to the top level XID.
*/
typedef struct SERIALIZABLEXID
{
/* hash key */
SERIALIZABLEXIDTAG tag;
/* data */
SERIALIZABLEXACT *myXact; /* pointer to the top level transaction data */
} SERIALIZABLEXID;
/*
* The PREDICATELOCKTARGETTAG struct identifies a database object which can
* be the target of predicate locks. It is designed to fit into 16 bytes
* with no padding. Note that this would need adjustment if we widen Oid or
* BlockNumber to more than 32 bits.
*
* TODO SSI: If we always use the same fields for the same type of value, we
* should rename these. Holding off until it's clear there are no exceptions.
* Since indexes are relations with blocks and tuples, it's looking likely that
* the rename will be possible. If not, we may need to divide the last field
* and use part of it for a target type, so that we know how to interpret the
data.
*/
typedef struct PREDICATELOCKTARGETTAG
{
uint32 locktag_field1; /* a 32-bit ID field */
uint32 locktag_field2; /* a 32-bit ID field */
uint32 locktag_field3; /* a 32-bit ID field */
uint16 locktag_field4; /* a 16-bit ID field */
uint16 locktag_field5; /* a 16-bit ID field */
} PREDICATELOCKTARGETTAG;
/*
* The PREDICATELOCKTARGET struct represents a database object on which there
* are predicate locks.
*
* A hash list of these objects is maintained in shared memory. An entry is
* added when a predicate lock is requested on an object which doesn't
* already have one. An entry is removed when the last lock is removed from
* its list.
*
* Because a check for predicate locks on a tuple target should also find
* locks on previous versions of the same row, if there are any created by
* overlapping transactions, we keep a pointer to the target for the prior
* version of the row. We also keep a pointer to the next version of the
* row, so that when we no longer have any predicate locks and the back
* pointer is clear, we can clean up the prior pointer for the next version.
*/
typedef struct PREDICATELOCKTARGET PREDICATELOCKTARGET;
struct PREDICATELOCKTARGET
{
/* hash key */
PREDICATELOCKTARGETTAG tag; /* unique identifier of lockable object */
/* data */
SHM_QUEUE predicateLocks; /* list of PREDICATELOCK objects assoc. with
* predicate lock target */
/*
* The following two pointers are only used for tuple locks, and are only
* consulted for conflict detection and cleanup; not for granularity
* promotion.
*/
PREDICATELOCKTARGET *priorVersionOfRow; /* what other locks to check */
PREDICATELOCKTARGET *nextVersionOfRow; /* who has pointer here for
* more targets */
};
/*
* The PREDICATELOCKTAG struct identifies an individual predicate lock.
*
* It is the combination of predicate lock target (which is a lockable
* object) and a serializable transaction which has acquired a lock on that
* target.
*/
typedef struct PREDICATELOCKTAG
{
PREDICATELOCKTARGET *myTarget;
SERIALIZABLEXACT *myXact;
} PREDICATELOCKTAG;
/*
* The PREDICATELOCK struct represents an individual lock.
*
* An entry can be created here when the related database object is read, or
* by promotion of multiple finer-grained targets. All entries related to a
* serializable transaction are removed when that serializable transaction is
* cleaned up. Entries can also be removed when they are combined into a
* single coarser-grained lock entry.
*/
typedef struct PREDICATELOCK
{
/* hash key */
PREDICATELOCKTAG tag; /* unique identifier of lock */
/* data */
SHM_QUEUE targetLink; /* list link in PREDICATELOCKTARGET's list of
* predicate locks */
SHM_QUEUE xactLink; /* list link in SERIALIZABLEXACT's list of
* predicate locks */
SerCommitSeqNo commitSeqNo; /* only used for summarized predicate locks */
} PREDICATELOCK;
/*
* The LOCALPREDICATELOCK struct represents a local copy of data which is
* also present in the PREDICATELOCK table, organized for fast access without
* needing to acquire a LWLock. It is strictly for optimization.
*
* Each serializable transaction creates its own local hash table to hold a
* collection of these. This information is used to determine when a number
* of fine-grained locks should be promoted to a single coarser-grained lock.
* The information is maintained more-or-less in parallel to the
* PREDICATELOCK data, but because this data is not protected by locks and is
* only used in an optimization heuristic, it is allowed to drift in a few
* corner cases where maintaining exact data would be expensive.
*
* The hash table is created when the serializable transaction acquires its
* snapshot, and its memory is released upon completion of the transaction.
*/
typedef struct LOCALPREDICATELOCK
{
/* hash key */
PREDICATELOCKTARGETTAG tag; /* unique identifier of lockable object */
/* data */
bool held; /* is lock held, or just its children? */
int childLocks; /* number of child locks currently held */
} LOCALPREDICATELOCK;
/*
* The types of predicate locks which can be acquired.
*/
typedef enum PredicateLockTargetType
{
PREDLOCKTAG_RELATION,
PREDLOCKTAG_PAGE,
PREDLOCKTAG_TUPLE
/* TODO SSI: Other types may be needed for index locking */
} PredicateLockTargetType;
/*
* This structure is used to quickly capture a copy of all predicate
* locks. This is currently used only by the pg_lock_status function,
* which in turn is used by the pg_locks view.
*/
typedef struct PredicateLockData
{
int nelements;
PREDICATELOCKTARGETTAG *locktags;
SERIALIZABLEXACT *xacts;
} PredicateLockData;
/*
* These macros define how we map logical IDs of lockable objects into the
* physical fields of PREDICATELOCKTARGETTAG. Use these to set up values,
* rather than accessing the fields directly. Note multiple eval of target!
*/
#define SET_PREDICATELOCKTARGETTAG_RELATION(locktag,dboid,reloid) \
((locktag).locktag_field1 = (dboid), \
(locktag).locktag_field2 = (reloid), \
(locktag).locktag_field3 = InvalidBlockNumber, \
(locktag).locktag_field4 = InvalidOffsetNumber, \
(locktag).locktag_field5 = 0)
#define SET_PREDICATELOCKTARGETTAG_PAGE(locktag,dboid,reloid,blocknum) \
((locktag).locktag_field1 = (dboid), \
(locktag).locktag_field2 = (reloid), \
(locktag).locktag_field3 = (blocknum), \
(locktag).locktag_field4 = InvalidOffsetNumber, \
(locktag).locktag_field5 = 0)
#define SET_PREDICATELOCKTARGETTAG_TUPLE(locktag,dboid,reloid,blocknum,offnum) \
((locktag).locktag_field1 = (dboid), \
(locktag).locktag_field2 = (reloid), \
(locktag).locktag_field3 = (blocknum), \
(locktag).locktag_field4 = (offnum), \
(locktag).locktag_field5 = 0)
#define GET_PREDICATELOCKTARGETTAG_DB(locktag) \
((locktag).locktag_field1)
#define GET_PREDICATELOCKTARGETTAG_RELATION(locktag) \
((locktag).locktag_field2)
#define GET_PREDICATELOCKTARGETTAG_PAGE(locktag) \
((locktag).locktag_field3)
#define GET_PREDICATELOCKTARGETTAG_OFFSET(locktag) \
((locktag).locktag_field4)
#define GET_PREDICATELOCKTARGETTAG_TYPE(locktag) \
(((locktag).locktag_field4 != InvalidOffsetNumber) ? PREDLOCKTAG_TUPLE : \
(((locktag).locktag_field3 != InvalidBlockNumber) ? PREDLOCKTAG_PAGE : \
PREDLOCKTAG_RELATION))
/*
* Two-phase commit statefile records. There are two types: for each
* transaction, we generate one per-transaction record and a variable
* number of per-predicate-lock records.
*/
typedef enum TwoPhasePredicateRecordType
{
TWOPHASEPREDICATERECORD_XACT,
TWOPHASEPREDICATERECORD_LOCK
} TwoPhasePredicateRecordType;
/*
* Per-transaction information to reconstruct a SERIALIZABLEXACT. Not
* much is needed because most of it not meaningful for a recovered
* prepared transaction.
*
* In particular, we do not record the in and out conflict lists for a
* prepared transaction because the associated SERIALIZABLEXACTs will
* not be available after recovery. Instead, we simply record the
* existence of each type of conflict by setting the transaction's
* summary conflict in/out flag.
*/
typedef struct TwoPhasePredicateXactRecord
{
TransactionId xmin;
uint32 flags;
} TwoPhasePredicateXactRecord;
/* Per-lock state */
typedef struct TwoPhasePredicateLockRecord
{
PREDICATELOCKTARGETTAG target;
} TwoPhasePredicateLockRecord;
typedef struct TwoPhasePredicateRecord
{
TwoPhasePredicateRecordType type;
union
{
TwoPhasePredicateXactRecord xactRecord;
TwoPhasePredicateLockRecord lockRecord;
} data;
} TwoPhasePredicateRecord;
/*
* Define a macro to use for an "empty" SERIALIZABLEXACT reference.
*/
#define InvalidSerializableXact ((SERIALIZABLEXACT *) NULL)
/*
* Function definitions for functions needing awareness of predicate
* locking internals.
*/
extern PredicateLockData *GetPredicateLockStatusData(void);
#endif /* PREDICATE_INTERNALS_H */
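
For illustration only, a small sketch (not part of this commit) of building a
predicate lock target tag with the SET_/GET_ macros defined above; the
function name and parameters are hypothetical:

    #include "postgres.h"
    #include "storage/predicate_internals.h"

    static PredicateLockTargetType
    example_page_tag(Oid dboid, Oid reloid, BlockNumber blkno)
    {
        PREDICATELOCKTARGETTAG tag;

        /* page-granularity target: block number set, offset left invalid */
        SET_PREDICATELOCKTARGETTAG_PAGE(tag, dboid, reloid, blkno);

        /* yields PREDLOCKTAG_PAGE, because locktag_field4 is
           InvalidOffsetNumber while locktag_field3 is a valid block number */
        return GET_PREDICATELOCKTARGETTAG_TYPE(tag);
    }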

View File

@ -35,7 +35,7 @@ typedef struct SHM_QUEUE
extern void InitShmemAccess(void *seghdr);
extern void InitShmemAllocation(void);
extern void *ShmemAlloc(Size size);
extern bool ShmemAddrIsValid(void *addr);
extern bool ShmemAddrIsValid(const void *addr);
extern void InitShmemIndex(void);
extern HTAB *ShmemInitHash(const char *name, long init_size, long max_size,
HASHCTL *infoP, int hash_flags);
@ -67,8 +67,9 @@ extern void SHMQueueInit(SHM_QUEUE *queue);
extern void SHMQueueElemInit(SHM_QUEUE *queue);
extern void SHMQueueDelete(SHM_QUEUE *queue);
extern void SHMQueueInsertBefore(SHM_QUEUE *queue, SHM_QUEUE *elem);
extern Pointer SHMQueueNext(SHM_QUEUE *queue, SHM_QUEUE *curElem,
extern Pointer SHMQueueNext(const SHM_QUEUE *queue, const SHM_QUEUE *curElem,
Size linkOffset);
extern bool SHMQueueEmpty(SHM_QUEUE *queue);
extern bool SHMQueueEmpty(const SHM_QUEUE *queue);
extern bool SHMQueueIsDetached(const SHM_QUEUE *queue);
#endif /* SHMEM_H */

src/test/isolation/.gitignore
View File

@ -0,0 +1,12 @@
# Local binaries
/isolationtester
/pg_isolation_regress
# Local generated source files
/specparse.c
/specscanner.c
# Generated subdirectories
/results/
/log/
/tmp_check/

View File

@ -0,0 +1,74 @@
#
# Makefile for isolation tests
#
subdir = src/test/isolation
top_builddir = ../../..
include $(top_builddir)/src/Makefile.global
ifeq ($(PORTNAME), win32)
LDLIBS += -lws2_32
endif
override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)
override LDLIBS := $(libpq_pgport) $(LDLIBS)
OBJS = specparse.o isolationtester.o
submake-regress:
$(MAKE) -C $(top_builddir)/src/test/regress pg_regress.o
pg_regress.o: | submake-regress
rm -f $@ && $(LN_S) $(top_builddir)/src/test/regress/pg_regress.o .
pg_isolation_regress: isolation_main.o pg_regress.o
$(CC) $(CFLAGS) $^ $(libpq_pgport) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
all: isolationtester pg_isolation_regress
isolationtester: $(OBJS) | submake-libpq submake-libpgport
$(CC) $(CFLAGS) $(OBJS) $(libpq_pgport) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
distprep: specparse.c
# There is no correct way to write a rule that generates two files.
# Rules with two targets don't have that meaning, they are merely
# shorthand for two otherwise separate rules. To be safe for parallel
# make, we must chain the dependencies like this. The semicolon is
# important, otherwise make will choose the built-in rule for
# gram.y=>gram.c.
all: isolationtester$(X) pg_isolation_regress$(X)
specparse.h: specparse.c ;
# specscanner is compiled as part of specparse
specparse.o: specscanner.c
specparse.c: specparse.y
ifdef BISON
$(BISON) $(BISONFLAGS) -o $@ $<
else
@$(missing) bison $< $@
endif
specscanner.c: specscanner.l
ifdef FLEX
$(FLEX) $(FLEXFLAGS) -o'$@' $<
else
@$(missing) flex $< $@
endif
# specparse.c is in the distribution tarball, so is not cleaned here
clean distclean:
rm -f isolationtester$(X) pg_isolation_regress$(X) $(OBJS) isolation_main.o
rm -f pg_regress.o
rm -rf results
maintainer-clean: distclean
rm -f specparse.c specscanner.c
installcheck: all
./pg_isolation_regress --schedule=$(srcdir)/isolation_schedule
check: all
./pg_isolation_regress --temp-install=./tmp_check --top-builddir=$(top_builddir) --schedule=$(srcdir)/isolation_schedule

src/test/isolation/README
View File

@ -0,0 +1,65 @@
src/test/isolation/README
Isolation tests
===============
This directory contains a set of tests for the serializable isolation level.
Testing isolation requires running multiple overlapping transactions, which
requires multiple concurrent connections, and therefore can't be tested
using the normal pg_regress program.
To represent a test with overlapping transactions, we use a test specification
file with a custom syntax, described in the next section.
isolationtester is a program that uses libpq to open multiple connections,
and executes a test specified by a spec file. A libpq connection string can
be given to specify the server and database to connect to; otherwise the
defaults derived from environment variables are used.
pg_isolation_regress is a tool identical to pg_regress, but instead of using
psql to execute a test, it uses isolationtester.
To run the tests, you need to have a server up and running. Run
gmake installcheck
Test specification
==================
Each isolation test is defined by a specification file, stored in the specs
subdirectory. A test specification consists of five parts, in this order:
setup { <SQL> }
The given SQL block is executed once, in one session only, before running
the test. Create any test tables or such objects here. This part is
optional.
teardown { <SQL> }
The teardown SQL block is executed once after the test is finished. Use
this to clean up, e.g. dropping any test tables. This part is optional.
session "<name>"
Each session is executed in a separate connection. A session consists
of three parts: setup, teardown, and one or more steps. The per-session
setup and teardown parts have the same syntax as the per-test setup and
teardown described above, but they are executed in every session,
before and after each permutation. The setup part typically contains a
"BEGIN" command to begin a transaction.
Each step has the syntax
step "<name>" { <SQL> }
where <name> is a unique name identifying this step, and SQL is a SQL
statement (or statements, separated by semicolons) that is executed in the
step.
permutation "<step name>" ...
A permutation line specifies a list of steps that are run in that order.
If no permutation lines are given, the test program automatically generates
all possible interleavings of the steps of the given sessions.
Lines beginning with a # are considered comments.
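
For concreteness, here is a minimal hypothetical specification in the syntax
described above (the table, session, and step names are made up for this
example; it is not one of the shipped specs). The two transactions exhibit
the read/write pattern behind write skew: each reads both rows, then updates
a different one.

# Hypothetical example: write skew between two serializable transactions.
setup { CREATE TABLE accounts (id int PRIMARY KEY, balance int); INSERT INTO accounts VALUES (1, 100), (2, 100); }
teardown { DROP TABLE accounts; }

session "s1"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "r1" { SELECT sum(balance) FROM accounts; }
step "w1" { UPDATE accounts SET balance = balance - 200 WHERE id = 1; }
step "c1" { COMMIT; }

session "s2"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "r2" { SELECT sum(balance) FROM accounts; }
step "w2" { UPDATE accounts SET balance = balance - 200 WHERE id = 2; }
step "c2" { COMMIT; }

permutation "r1" "r2" "w1" "w2" "c1" "c2"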

View File

@ -0,0 +1,299 @@
Parsed test spec with 2 sessions
starting permutation: rx1 wy1 c1 ry2 wx2 c2
step rx1: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:00' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:00';
count
0
step wy1: INSERT INTO room_reservation VALUES ('101', TIMESTAMP WITH TIME ZONE '2010-04-01 13:00', TIMESTAMP WITH TIME ZONE '2010-04-01 14:00', 'Carol');
step c1: COMMIT;
step ry2: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:30';
count
1
step wx2: UPDATE room_reservation SET start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 13:30', end_time = TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' WHERE room_id = '101' AND start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 10:00';
step c2: COMMIT;
starting permutation: rx1 wy1 ry2 c1 wx2 c2
step rx1: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:00' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:00';
count
0
step wy1: INSERT INTO room_reservation VALUES ('101', TIMESTAMP WITH TIME ZONE '2010-04-01 13:00', TIMESTAMP WITH TIME ZONE '2010-04-01 14:00', 'Carol');
step ry2: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:30';
count
0
step c1: COMMIT;
step wx2: UPDATE room_reservation SET start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 13:30', end_time = TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' WHERE room_id = '101' AND start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 10:00';
ERROR: could not serialize access due to read/write dependencies among transactions
step c2: COMMIT;
starting permutation: rx1 wy1 ry2 wx2 c1 c2
step rx1: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:00' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:00';
count
0
step wy1: INSERT INTO room_reservation VALUES ('101', TIMESTAMP WITH TIME ZONE '2010-04-01 13:00', TIMESTAMP WITH TIME ZONE '2010-04-01 14:00', 'Carol');
step ry2: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:30';
count
0
step wx2: UPDATE room_reservation SET start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 13:30', end_time = TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' WHERE room_id = '101' AND start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 10:00';
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx1 wy1 ry2 wx2 c2 c1
step rx1: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:00' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:00';
count
0
step wy1: INSERT INTO room_reservation VALUES ('101', TIMESTAMP WITH TIME ZONE '2010-04-01 13:00', TIMESTAMP WITH TIME ZONE '2010-04-01 14:00', 'Carol');
step ry2: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:30';
count
0
step wx2: UPDATE room_reservation SET start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 13:30', end_time = TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' WHERE room_id = '101' AND start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 10:00';
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx1 ry2 wy1 c1 wx2 c2
step rx1: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:00' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:00';
count
0
step ry2: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:30';
count
0
step wy1: INSERT INTO room_reservation VALUES ('101', TIMESTAMP WITH TIME ZONE '2010-04-01 13:00', TIMESTAMP WITH TIME ZONE '2010-04-01 14:00', 'Carol');
step c1: COMMIT;
step wx2: UPDATE room_reservation SET start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 13:30', end_time = TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' WHERE room_id = '101' AND start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 10:00';
ERROR: could not serialize access due to read/write dependencies among transactions
step c2: COMMIT;
starting permutation: rx1 ry2 wy1 wx2 c1 c2
step rx1: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:00' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:00';
count
0
step ry2: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:30';
count
0
step wy1: INSERT INTO room_reservation VALUES ('101', TIMESTAMP WITH TIME ZONE '2010-04-01 13:00', TIMESTAMP WITH TIME ZONE '2010-04-01 14:00', 'Carol');
step wx2: UPDATE room_reservation SET start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 13:30', end_time = TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' WHERE room_id = '101' AND start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 10:00';
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx1 ry2 wy1 wx2 c2 c1
step rx1: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:00' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:00';
count
0
step ry2: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:30';
count
0
step wy1: INSERT INTO room_reservation VALUES ('101', TIMESTAMP WITH TIME ZONE '2010-04-01 13:00', TIMESTAMP WITH TIME ZONE '2010-04-01 14:00', 'Carol');
step wx2: UPDATE room_reservation SET start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 13:30', end_time = TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' WHERE room_id = '101' AND start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 10:00';
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx1 ry2 wx2 wy1 c1 c2
step rx1: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:00' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:00';
count
0
step ry2: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:30';
count
0
step wx2: UPDATE room_reservation SET start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 13:30', end_time = TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' WHERE room_id = '101' AND start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 10:00';
step wy1: INSERT INTO room_reservation VALUES ('101', TIMESTAMP WITH TIME ZONE '2010-04-01 13:00', TIMESTAMP WITH TIME ZONE '2010-04-01 14:00', 'Carol');
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx1 ry2 wx2 wy1 c2 c1
step rx1: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:00' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:00';
count
0
step ry2: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:30';
count
0
step wx2: UPDATE room_reservation SET start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 13:30', end_time = TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' WHERE room_id = '101' AND start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 10:00';
step wy1: INSERT INTO room_reservation VALUES ('101', TIMESTAMP WITH TIME ZONE '2010-04-01 13:00', TIMESTAMP WITH TIME ZONE '2010-04-01 14:00', 'Carol');
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx1 ry2 wx2 c2 wy1 c1
step rx1: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:00' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:00';
count
0
step ry2: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:30';
count
0
step wx2: UPDATE room_reservation SET start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 13:30', end_time = TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' WHERE room_id = '101' AND start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 10:00';
step c2: COMMIT;
step wy1: INSERT INTO room_reservation VALUES ('101', TIMESTAMP WITH TIME ZONE '2010-04-01 13:00', TIMESTAMP WITH TIME ZONE '2010-04-01 14:00', 'Carol');
ERROR: could not serialize access due to read/write dependencies among transactions
step c1: COMMIT;
starting permutation: ry2 rx1 wy1 c1 wx2 c2
step ry2: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:30';
count
0
step rx1: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:00' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:00';
count
0
step wy1: INSERT INTO room_reservation VALUES ('101', TIMESTAMP WITH TIME ZONE '2010-04-01 13:00', TIMESTAMP WITH TIME ZONE '2010-04-01 14:00', 'Carol');
step c1: COMMIT;
step wx2: UPDATE room_reservation SET start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 13:30', end_time = TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' WHERE room_id = '101' AND start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 10:00';
ERROR: could not serialize access due to read/write dependencies among transactions
step c2: COMMIT;
starting permutation: ry2 rx1 wy1 wx2 c1 c2
step ry2: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:30';
count
0
step rx1: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:00' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:00';
count
0
step wy1: INSERT INTO room_reservation VALUES ('101', TIMESTAMP WITH TIME ZONE '2010-04-01 13:00', TIMESTAMP WITH TIME ZONE '2010-04-01 14:00', 'Carol');
step wx2: UPDATE room_reservation SET start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 13:30', end_time = TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' WHERE room_id = '101' AND start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 10:00';
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: ry2 rx1 wy1 wx2 c2 c1
step ry2: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:30';
count
0
step rx1: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:00' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:00';
count
0
step wy1: INSERT INTO room_reservation VALUES ('101', TIMESTAMP WITH TIME ZONE '2010-04-01 13:00', TIMESTAMP WITH TIME ZONE '2010-04-01 14:00', 'Carol');
step wx2: UPDATE room_reservation SET start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 13:30', end_time = TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' WHERE room_id = '101' AND start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 10:00';
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: ry2 rx1 wx2 wy1 c1 c2
step ry2: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:30';
count
0
step rx1: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:00' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:00';
count
0
step wx2: UPDATE room_reservation SET start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 13:30', end_time = TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' WHERE room_id = '101' AND start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 10:00';
step wy1: INSERT INTO room_reservation VALUES ('101', TIMESTAMP WITH TIME ZONE '2010-04-01 13:00', TIMESTAMP WITH TIME ZONE '2010-04-01 14:00', 'Carol');
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: ry2 rx1 wx2 wy1 c2 c1
step ry2: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:30';
count
0
step rx1: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:00' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:00';
count
0
step wx2: UPDATE room_reservation SET start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 13:30', end_time = TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' WHERE room_id = '101' AND start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 10:00';
step wy1: INSERT INTO room_reservation VALUES ('101', TIMESTAMP WITH TIME ZONE '2010-04-01 13:00', TIMESTAMP WITH TIME ZONE '2010-04-01 14:00', 'Carol');
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: ry2 rx1 wx2 c2 wy1 c1
step ry2: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:30';
count
0
step rx1: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:00' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:00';
count
0
step wx2: UPDATE room_reservation SET start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 13:30', end_time = TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' WHERE room_id = '101' AND start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 10:00';
step c2: COMMIT;
step wy1: INSERT INTO room_reservation VALUES ('101', TIMESTAMP WITH TIME ZONE '2010-04-01 13:00', TIMESTAMP WITH TIME ZONE '2010-04-01 14:00', 'Carol');
ERROR: could not serialize access due to read/write dependencies among transactions
step c1: COMMIT;
starting permutation: ry2 wx2 rx1 wy1 c1 c2
step ry2: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:30';
count
0
step wx2: UPDATE room_reservation SET start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 13:30', end_time = TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' WHERE room_id = '101' AND start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 10:00';
step rx1: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:00' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:00';
count
0
step wy1: INSERT INTO room_reservation VALUES ('101', TIMESTAMP WITH TIME ZONE '2010-04-01 13:00', TIMESTAMP WITH TIME ZONE '2010-04-01 14:00', 'Carol');
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: ry2 wx2 rx1 wy1 c2 c1
step ry2: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:30';
count
0
step wx2: UPDATE room_reservation SET start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 13:30', end_time = TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' WHERE room_id = '101' AND start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 10:00';
step rx1: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:00' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:00';
count
0
step wy1: INSERT INTO room_reservation VALUES ('101', TIMESTAMP WITH TIME ZONE '2010-04-01 13:00', TIMESTAMP WITH TIME ZONE '2010-04-01 14:00', 'Carol');
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: ry2 wx2 rx1 c2 wy1 c1
step ry2: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:30';
count
0
step wx2: UPDATE room_reservation SET start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 13:30', end_time = TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' WHERE room_id = '101' AND start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 10:00';
step rx1: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:00' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:00';
count
0
step c2: COMMIT;
step wy1: INSERT INTO room_reservation VALUES ('101', TIMESTAMP WITH TIME ZONE '2010-04-01 13:00', TIMESTAMP WITH TIME ZONE '2010-04-01 14:00', 'Carol');
ERROR: could not serialize access due to read/write dependencies among transactions
step c1: COMMIT;
starting permutation: ry2 wx2 c2 rx1 wy1 c1
step ry2: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:30';
count
0
step wx2: UPDATE room_reservation SET start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 13:30', end_time = TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' WHERE room_id = '101' AND start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 10:00';
step c2: COMMIT;
step rx1: SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:00' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:00';
count
1
step wy1: INSERT INTO room_reservation VALUES ('101', TIMESTAMP WITH TIME ZONE '2010-04-01 13:00', TIMESTAMP WITH TIME ZONE '2010-04-01 14:00', 'Carol');
step c1: COMMIT;

View File

@ -0,0 +1,24 @@
Parsed test spec with 4 sessions
starting permutation: rx1 wx2 c2 wx3 ry3 wy4 rz4 c4 c3 wz1 c1
step rx1: SELECT * FROM t WHERE id = 1000000;
id txt
1000000
step wx2: UPDATE t SET txt = 'b' WHERE id = 1000000;
step c2: COMMIT;
step wx3: UPDATE t SET txt = 'c' WHERE id = 1000000;
step ry3: SELECT * FROM t WHERE id = 500000;
id txt
500000
step wy4: UPDATE t SET txt = 'd' WHERE id = 500000;
step rz4: SELECT * FROM t WHERE id = 1;
id txt
1
step c4: COMMIT;
step c3: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
step wz1: UPDATE t SET txt = 'a' WHERE id = 1;
step c1: COMMIT;

View File

@ -0,0 +1,641 @@
Parsed test spec with 2 sessions
starting permutation: rxy1 wx1 c1 wy2 rxy2 c2
step rxy1: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
9 a 1
10 a 1
step wx1: update test_t set val2 = 2 where val2 = 1 and id = 10;
step c1: COMMIT;
step wy2: update test_t set val2 = 2 where val2 = 1 and id = 9;
step rxy2: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
step c2: COMMIT;
starting permutation: rxy1 wx1 wy2 c1 rxy2 c2
step rxy1: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
9 a 1
10 a 1
step wx1: update test_t set val2 = 2 where val2 = 1 and id = 10;
step wy2: update test_t set val2 = 2 where val2 = 1 and id = 9;
step c1: COMMIT;
step rxy2: select * from test_t where val2 = 1;
ERROR: could not serialize access due to read/write dependencies among transactions
step c2: COMMIT;
starting permutation: rxy1 wx1 wy2 rxy2 c1 c2
step rxy1: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
9 a 1
10 a 1
step wx1: update test_t set val2 = 2 where val2 = 1 and id = 10;
step wy2: update test_t set val2 = 2 where val2 = 1 and id = 9;
step rxy2: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
10 a 1
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rxy1 wx1 wy2 rxy2 c2 c1
step rxy1: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
9 a 1
10 a 1
step wx1: update test_t set val2 = 2 where val2 = 1 and id = 10;
step wy2: update test_t set val2 = 2 where val2 = 1 and id = 9;
step rxy2: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
10 a 1
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rxy1 wy2 wx1 c1 rxy2 c2
step rxy1: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
9 a 1
10 a 1
step wy2: update test_t set val2 = 2 where val2 = 1 and id = 9;
step wx1: update test_t set val2 = 2 where val2 = 1 and id = 10;
step c1: COMMIT;
step rxy2: select * from test_t where val2 = 1;
ERROR: could not serialize access due to read/write dependencies among transactions
step c2: COMMIT;
starting permutation: rxy1 wy2 wx1 rxy2 c1 c2
step rxy1: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
9 a 1
10 a 1
step wy2: update test_t set val2 = 2 where val2 = 1 and id = 9;
step wx1: update test_t set val2 = 2 where val2 = 1 and id = 10;
step rxy2: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
10 a 1
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rxy1 wy2 wx1 rxy2 c2 c1
step rxy1: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
9 a 1
10 a 1
step wy2: update test_t set val2 = 2 where val2 = 1 and id = 9;
step wx1: update test_t set val2 = 2 where val2 = 1 and id = 10;
step rxy2: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
10 a 1
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rxy1 wy2 rxy2 wx1 c1 c2
step rxy1: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
9 a 1
10 a 1
step wy2: update test_t set val2 = 2 where val2 = 1 and id = 9;
step rxy2: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
10 a 1
step wx1: update test_t set val2 = 2 where val2 = 1 and id = 10;
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rxy1 wy2 rxy2 wx1 c2 c1
step rxy1: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
9 a 1
10 a 1
step wy2: update test_t set val2 = 2 where val2 = 1 and id = 9;
step rxy2: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
10 a 1
step wx1: update test_t set val2 = 2 where val2 = 1 and id = 10;
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rxy1 wy2 rxy2 c2 wx1 c1
step rxy1: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
9 a 1
10 a 1
step wy2: update test_t set val2 = 2 where val2 = 1 and id = 9;
step rxy2: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
10 a 1
step c2: COMMIT;
step wx1: update test_t set val2 = 2 where val2 = 1 and id = 10;
ERROR: could not serialize access due to read/write dependencies among transactions
step c1: COMMIT;
starting permutation: wy2 rxy1 wx1 c1 rxy2 c2
step wy2: update test_t set val2 = 2 where val2 = 1 and id = 9;
step rxy1: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
9 a 1
10 a 1
step wx1: update test_t set val2 = 2 where val2 = 1 and id = 10;
step c1: COMMIT;
step rxy2: select * from test_t where val2 = 1;
ERROR: could not serialize access due to read/write dependencies among transactions
step c2: COMMIT;
starting permutation: wy2 rxy1 wx1 rxy2 c1 c2
step wy2: update test_t set val2 = 2 where val2 = 1 and id = 9;
step rxy1: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
9 a 1
10 a 1
step wx1: update test_t set val2 = 2 where val2 = 1 and id = 10;
step rxy2: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
10 a 1
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: wy2 rxy1 wx1 rxy2 c2 c1
step wy2: update test_t set val2 = 2 where val2 = 1 and id = 9;
step rxy1: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
9 a 1
10 a 1
step wx1: update test_t set val2 = 2 where val2 = 1 and id = 10;
step rxy2: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
10 a 1
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: wy2 rxy1 rxy2 wx1 c1 c2
step wy2: update test_t set val2 = 2 where val2 = 1 and id = 9;
step rxy1: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
9 a 1
10 a 1
step rxy2: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
10 a 1
step wx1: update test_t set val2 = 2 where val2 = 1 and id = 10;
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: wy2 rxy1 rxy2 wx1 c2 c1
step wy2: update test_t set val2 = 2 where val2 = 1 and id = 9;
step rxy1: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
9 a 1
10 a 1
step rxy2: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
10 a 1
step wx1: update test_t set val2 = 2 where val2 = 1 and id = 10;
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: wy2 rxy1 rxy2 c2 wx1 c1
step wy2: update test_t set val2 = 2 where val2 = 1 and id = 9;
step rxy1: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
9 a 1
10 a 1
step rxy2: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
10 a 1
step c2: COMMIT;
step wx1: update test_t set val2 = 2 where val2 = 1 and id = 10;
ERROR: could not serialize access due to read/write dependencies among transactions
step c1: COMMIT;
starting permutation: wy2 rxy2 rxy1 wx1 c1 c2
step wy2: update test_t set val2 = 2 where val2 = 1 and id = 9;
step rxy2: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
10 a 1
step rxy1: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
9 a 1
10 a 1
step wx1: update test_t set val2 = 2 where val2 = 1 and id = 10;
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: wy2 rxy2 rxy1 wx1 c2 c1
step wy2: update test_t set val2 = 2 where val2 = 1 and id = 9;
step rxy2: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
10 a 1
step rxy1: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
9 a 1
10 a 1
step wx1: update test_t set val2 = 2 where val2 = 1 and id = 10;
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: wy2 rxy2 rxy1 c2 wx1 c1
step wy2: update test_t set val2 = 2 where val2 = 1 and id = 9;
step rxy2: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
10 a 1
step rxy1: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
9 a 1
10 a 1
step c2: COMMIT;
step wx1: update test_t set val2 = 2 where val2 = 1 and id = 10;
ERROR: could not serialize access due to read/write dependencies among transactions
step c1: COMMIT;
starting permutation: wy2 rxy2 c2 rxy1 wx1 c1
step wy2: update test_t set val2 = 2 where val2 = 1 and id = 9;
step rxy2: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
10 a 1
step c2: COMMIT;
step rxy1: select * from test_t where val2 = 1;
id val1 val2
0 a 1
1 a 1
2 a 1
3 a 1
4 a 1
5 a 1
6 a 1
7 a 1
8 a 1
10 a 1
step wx1: update test_t set val2 = 2 where val2 = 1 and id = 10;
step c1: COMMIT;


@ -0,0 +1,299 @@
Parsed test spec with 2 sessions
starting permutation: rx1 wy1 c1 ry2 wx2 c2
step rx1: SELECT count(*) FROM person WHERE person_id = 1 AND is_project_manager;
count
1
step wy1: INSERT INTO project VALUES (101, 'Build Great Wall', 1);
step c1: COMMIT;
step ry2: SELECT count(*) FROM project WHERE project_manager = 1;
count
1
step wx2: UPDATE person SET is_project_manager = false WHERE person_id = 1;
step c2: COMMIT;
starting permutation: rx1 wy1 ry2 c1 wx2 c2
step rx1: SELECT count(*) FROM person WHERE person_id = 1 AND is_project_manager;
count
1
step wy1: INSERT INTO project VALUES (101, 'Build Great Wall', 1);
step ry2: SELECT count(*) FROM project WHERE project_manager = 1;
count
0
step c1: COMMIT;
step wx2: UPDATE person SET is_project_manager = false WHERE person_id = 1;
ERROR: could not serialize access due to read/write dependencies among transactions
step c2: COMMIT;
starting permutation: rx1 wy1 ry2 wx2 c1 c2
step rx1: SELECT count(*) FROM person WHERE person_id = 1 AND is_project_manager;
count
1
step wy1: INSERT INTO project VALUES (101, 'Build Great Wall', 1);
step ry2: SELECT count(*) FROM project WHERE project_manager = 1;
count
0
step wx2: UPDATE person SET is_project_manager = false WHERE person_id = 1;
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx1 wy1 ry2 wx2 c2 c1
step rx1: SELECT count(*) FROM person WHERE person_id = 1 AND is_project_manager;
count
1
step wy1: INSERT INTO project VALUES (101, 'Build Great Wall', 1);
step ry2: SELECT count(*) FROM project WHERE project_manager = 1;
count
0
step wx2: UPDATE person SET is_project_manager = false WHERE person_id = 1;
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx1 ry2 wy1 c1 wx2 c2
step rx1: SELECT count(*) FROM person WHERE person_id = 1 AND is_project_manager;
count
1
step ry2: SELECT count(*) FROM project WHERE project_manager = 1;
count
0
step wy1: INSERT INTO project VALUES (101, 'Build Great Wall', 1);
step c1: COMMIT;
step wx2: UPDATE person SET is_project_manager = false WHERE person_id = 1;
ERROR: could not serialize access due to read/write dependencies among transactions
step c2: COMMIT;
starting permutation: rx1 ry2 wy1 wx2 c1 c2
step rx1: SELECT count(*) FROM person WHERE person_id = 1 AND is_project_manager;
count
1
step ry2: SELECT count(*) FROM project WHERE project_manager = 1;
count
0
step wy1: INSERT INTO project VALUES (101, 'Build Great Wall', 1);
step wx2: UPDATE person SET is_project_manager = false WHERE person_id = 1;
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx1 ry2 wy1 wx2 c2 c1
step rx1: SELECT count(*) FROM person WHERE person_id = 1 AND is_project_manager;
count
1
step ry2: SELECT count(*) FROM project WHERE project_manager = 1;
count
0
step wy1: INSERT INTO project VALUES (101, 'Build Great Wall', 1);
step wx2: UPDATE person SET is_project_manager = false WHERE person_id = 1;
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx1 ry2 wx2 wy1 c1 c2
step rx1: SELECT count(*) FROM person WHERE person_id = 1 AND is_project_manager;
count
1
step ry2: SELECT count(*) FROM project WHERE project_manager = 1;
count
0
step wx2: UPDATE person SET is_project_manager = false WHERE person_id = 1;
step wy1: INSERT INTO project VALUES (101, 'Build Great Wall', 1);
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx1 ry2 wx2 wy1 c2 c1
step rx1: SELECT count(*) FROM person WHERE person_id = 1 AND is_project_manager;
count
1
step ry2: SELECT count(*) FROM project WHERE project_manager = 1;
count
0
step wx2: UPDATE person SET is_project_manager = false WHERE person_id = 1;
step wy1: INSERT INTO project VALUES (101, 'Build Great Wall', 1);
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx1 ry2 wx2 c2 wy1 c1
step rx1: SELECT count(*) FROM person WHERE person_id = 1 AND is_project_manager;
count
1
step ry2: SELECT count(*) FROM project WHERE project_manager = 1;
count
0
step wx2: UPDATE person SET is_project_manager = false WHERE person_id = 1;
step c2: COMMIT;
step wy1: INSERT INTO project VALUES (101, 'Build Great Wall', 1);
ERROR: could not serialize access due to read/write dependencies among transactions
step c1: COMMIT;
starting permutation: ry2 rx1 wy1 c1 wx2 c2
step ry2: SELECT count(*) FROM project WHERE project_manager = 1;
count
0
step rx1: SELECT count(*) FROM person WHERE person_id = 1 AND is_project_manager;
count
1
step wy1: INSERT INTO project VALUES (101, 'Build Great Wall', 1);
step c1: COMMIT;
step wx2: UPDATE person SET is_project_manager = false WHERE person_id = 1;
ERROR: could not serialize access due to read/write dependencies among transactions
step c2: COMMIT;
starting permutation: ry2 rx1 wy1 wx2 c1 c2
step ry2: SELECT count(*) FROM project WHERE project_manager = 1;
count
0
step rx1: SELECT count(*) FROM person WHERE person_id = 1 AND is_project_manager;
count
1
step wy1: INSERT INTO project VALUES (101, 'Build Great Wall', 1);
step wx2: UPDATE person SET is_project_manager = false WHERE person_id = 1;
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: ry2 rx1 wy1 wx2 c2 c1
step ry2: SELECT count(*) FROM project WHERE project_manager = 1;
count
0
step rx1: SELECT count(*) FROM person WHERE person_id = 1 AND is_project_manager;
count
1
step wy1: INSERT INTO project VALUES (101, 'Build Great Wall', 1);
step wx2: UPDATE person SET is_project_manager = false WHERE person_id = 1;
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: ry2 rx1 wx2 wy1 c1 c2
step ry2: SELECT count(*) FROM project WHERE project_manager = 1;
count
0
step rx1: SELECT count(*) FROM person WHERE person_id = 1 AND is_project_manager;
count
1
step wx2: UPDATE person SET is_project_manager = false WHERE person_id = 1;
step wy1: INSERT INTO project VALUES (101, 'Build Great Wall', 1);
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: ry2 rx1 wx2 wy1 c2 c1
step ry2: SELECT count(*) FROM project WHERE project_manager = 1;
count
0
step rx1: SELECT count(*) FROM person WHERE person_id = 1 AND is_project_manager;
count
1
step wx2: UPDATE person SET is_project_manager = false WHERE person_id = 1;
step wy1: INSERT INTO project VALUES (101, 'Build Great Wall', 1);
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: ry2 rx1 wx2 c2 wy1 c1
step ry2: SELECT count(*) FROM project WHERE project_manager = 1;
count
0
step rx1: SELECT count(*) FROM person WHERE person_id = 1 AND is_project_manager;
count
1
step wx2: UPDATE person SET is_project_manager = false WHERE person_id = 1;
step c2: COMMIT;
step wy1: INSERT INTO project VALUES (101, 'Build Great Wall', 1);
ERROR: could not serialize access due to read/write dependencies among transactions
step c1: COMMIT;
starting permutation: ry2 wx2 rx1 wy1 c1 c2
step ry2: SELECT count(*) FROM project WHERE project_manager = 1;
count
0
step wx2: UPDATE person SET is_project_manager = false WHERE person_id = 1;
step rx1: SELECT count(*) FROM person WHERE person_id = 1 AND is_project_manager;
count
1
step wy1: INSERT INTO project VALUES (101, 'Build Great Wall', 1);
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: ry2 wx2 rx1 wy1 c2 c1
step ry2: SELECT count(*) FROM project WHERE project_manager = 1;
count
0
step wx2: UPDATE person SET is_project_manager = false WHERE person_id = 1;
step rx1: SELECT count(*) FROM person WHERE person_id = 1 AND is_project_manager;
count
1
step wy1: INSERT INTO project VALUES (101, 'Build Great Wall', 1);
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: ry2 wx2 rx1 c2 wy1 c1
step ry2: SELECT count(*) FROM project WHERE project_manager = 1;
count
0
step wx2: UPDATE person SET is_project_manager = false WHERE person_id = 1;
step rx1: SELECT count(*) FROM person WHERE person_id = 1 AND is_project_manager;
count
1
step c2: COMMIT;
step wy1: INSERT INTO project VALUES (101, 'Build Great Wall', 1);
ERROR: could not serialize access due to read/write dependencies among transactions
step c1: COMMIT;
starting permutation: ry2 wx2 c2 rx1 wy1 c1
step ry2: SELECT count(*) FROM project WHERE project_manager = 1;
count
0
step wx2: UPDATE person SET is_project_manager = false WHERE person_id = 1;
step c2: COMMIT;
step rx1: SELECT count(*) FROM person WHERE person_id = 1 AND is_project_manager;
count
0
step wy1: INSERT INTO project VALUES (101, 'Build Great Wall', 1);
step c1: COMMIT;

File diff suppressed because it is too large


@ -0,0 +1,629 @@
Parsed test spec with 2 sessions
starting permutation: rx1 wy1 c1 rx2 ry2 wx2 c2
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step wy1: INSERT INTO b VALUES (1);
step c1: COMMIT;
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
1
step wx2: DELETE FROM a WHERE i = 1;
step c2: COMMIT;
starting permutation: rx1 wy1 rx2 c1 ry2 wx2 c2
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step wy1: INSERT INTO b VALUES (1);
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step c1: COMMIT;
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step wx2: DELETE FROM a WHERE i = 1;
ERROR: could not serialize access due to read/write dependencies among transactions
step c2: COMMIT;
starting permutation: rx1 wy1 rx2 ry2 c1 wx2 c2
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step wy1: INSERT INTO b VALUES (1);
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step c1: COMMIT;
step wx2: DELETE FROM a WHERE i = 1;
ERROR: could not serialize access due to read/write dependencies among transactions
step c2: COMMIT;
starting permutation: rx1 wy1 rx2 ry2 wx2 c1 c2
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step wy1: INSERT INTO b VALUES (1);
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step wx2: DELETE FROM a WHERE i = 1;
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx1 wy1 rx2 ry2 wx2 c2 c1
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step wy1: INSERT INTO b VALUES (1);
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step wx2: DELETE FROM a WHERE i = 1;
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx1 rx2 wy1 c1 ry2 wx2 c2
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step wy1: INSERT INTO b VALUES (1);
step c1: COMMIT;
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step wx2: DELETE FROM a WHERE i = 1;
ERROR: could not serialize access due to read/write dependencies among transactions
step c2: COMMIT;
starting permutation: rx1 rx2 wy1 ry2 c1 wx2 c2
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step wy1: INSERT INTO b VALUES (1);
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step c1: COMMIT;
step wx2: DELETE FROM a WHERE i = 1;
ERROR: could not serialize access due to read/write dependencies among transactions
step c2: COMMIT;
starting permutation: rx1 rx2 wy1 ry2 wx2 c1 c2
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step wy1: INSERT INTO b VALUES (1);
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step wx2: DELETE FROM a WHERE i = 1;
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx1 rx2 wy1 ry2 wx2 c2 c1
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step wy1: INSERT INTO b VALUES (1);
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step wx2: DELETE FROM a WHERE i = 1;
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx1 rx2 ry2 wy1 c1 wx2 c2
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step wy1: INSERT INTO b VALUES (1);
step c1: COMMIT;
step wx2: DELETE FROM a WHERE i = 1;
ERROR: could not serialize access due to read/write dependencies among transactions
step c2: COMMIT;
starting permutation: rx1 rx2 ry2 wy1 wx2 c1 c2
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step wy1: INSERT INTO b VALUES (1);
step wx2: DELETE FROM a WHERE i = 1;
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx1 rx2 ry2 wy1 wx2 c2 c1
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step wy1: INSERT INTO b VALUES (1);
step wx2: DELETE FROM a WHERE i = 1;
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx1 rx2 ry2 wx2 wy1 c1 c2
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step wx2: DELETE FROM a WHERE i = 1;
step wy1: INSERT INTO b VALUES (1);
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx1 rx2 ry2 wx2 wy1 c2 c1
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step wx2: DELETE FROM a WHERE i = 1;
step wy1: INSERT INTO b VALUES (1);
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx1 rx2 ry2 wx2 c2 wy1 c1
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step wx2: DELETE FROM a WHERE i = 1;
step c2: COMMIT;
step wy1: INSERT INTO b VALUES (1);
ERROR: could not serialize access due to read/write dependencies among transactions
step c1: COMMIT;
starting permutation: rx2 rx1 wy1 c1 ry2 wx2 c2
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step wy1: INSERT INTO b VALUES (1);
step c1: COMMIT;
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step wx2: DELETE FROM a WHERE i = 1;
ERROR: could not serialize access due to read/write dependencies among transactions
step c2: COMMIT;
starting permutation: rx2 rx1 wy1 ry2 c1 wx2 c2
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step wy1: INSERT INTO b VALUES (1);
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step c1: COMMIT;
step wx2: DELETE FROM a WHERE i = 1;
ERROR: could not serialize access due to read/write dependencies among transactions
step c2: COMMIT;
starting permutation: rx2 rx1 wy1 ry2 wx2 c1 c2
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step wy1: INSERT INTO b VALUES (1);
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step wx2: DELETE FROM a WHERE i = 1;
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx2 rx1 wy1 ry2 wx2 c2 c1
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step wy1: INSERT INTO b VALUES (1);
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step wx2: DELETE FROM a WHERE i = 1;
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx2 rx1 ry2 wy1 c1 wx2 c2
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step wy1: INSERT INTO b VALUES (1);
step c1: COMMIT;
step wx2: DELETE FROM a WHERE i = 1;
ERROR: could not serialize access due to read/write dependencies among transactions
step c2: COMMIT;
starting permutation: rx2 rx1 ry2 wy1 wx2 c1 c2
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step wy1: INSERT INTO b VALUES (1);
step wx2: DELETE FROM a WHERE i = 1;
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx2 rx1 ry2 wy1 wx2 c2 c1
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step wy1: INSERT INTO b VALUES (1);
step wx2: DELETE FROM a WHERE i = 1;
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx2 rx1 ry2 wx2 wy1 c1 c2
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step wx2: DELETE FROM a WHERE i = 1;
step wy1: INSERT INTO b VALUES (1);
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx2 rx1 ry2 wx2 wy1 c2 c1
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step wx2: DELETE FROM a WHERE i = 1;
step wy1: INSERT INTO b VALUES (1);
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx2 rx1 ry2 wx2 c2 wy1 c1
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step wx2: DELETE FROM a WHERE i = 1;
step c2: COMMIT;
step wy1: INSERT INTO b VALUES (1);
ERROR: could not serialize access due to read/write dependencies among transactions
step c1: COMMIT;
starting permutation: rx2 ry2 rx1 wy1 c1 wx2 c2
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step wy1: INSERT INTO b VALUES (1);
step c1: COMMIT;
step wx2: DELETE FROM a WHERE i = 1;
ERROR: could not serialize access due to read/write dependencies among transactions
step c2: COMMIT;
starting permutation: rx2 ry2 rx1 wy1 wx2 c1 c2
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step wy1: INSERT INTO b VALUES (1);
step wx2: DELETE FROM a WHERE i = 1;
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx2 ry2 rx1 wy1 wx2 c2 c1
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step wy1: INSERT INTO b VALUES (1);
step wx2: DELETE FROM a WHERE i = 1;
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx2 ry2 rx1 wx2 wy1 c1 c2
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step wx2: DELETE FROM a WHERE i = 1;
step wy1: INSERT INTO b VALUES (1);
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx2 ry2 rx1 wx2 wy1 c2 c1
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step wx2: DELETE FROM a WHERE i = 1;
step wy1: INSERT INTO b VALUES (1);
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx2 ry2 rx1 wx2 c2 wy1 c1
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step wx2: DELETE FROM a WHERE i = 1;
step c2: COMMIT;
step wy1: INSERT INTO b VALUES (1);
ERROR: could not serialize access due to read/write dependencies among transactions
step c1: COMMIT;
starting permutation: rx2 ry2 wx2 rx1 wy1 c1 c2
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step wx2: DELETE FROM a WHERE i = 1;
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step wy1: INSERT INTO b VALUES (1);
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx2 ry2 wx2 rx1 wy1 c2 c1
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step wx2: DELETE FROM a WHERE i = 1;
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step wy1: INSERT INTO b VALUES (1);
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx2 ry2 wx2 rx1 c2 wy1 c1
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step wx2: DELETE FROM a WHERE i = 1;
step rx1: SELECT i FROM a WHERE i = 1;
i
1
step c2: COMMIT;
step wy1: INSERT INTO b VALUES (1);
ERROR: could not serialize access due to read/write dependencies among transactions
step c1: COMMIT;
starting permutation: rx2 ry2 wx2 c2 rx1 wy1 c1
step rx2: SELECT i FROM a WHERE i = 1;
i
1
step ry2: SELECT a_id FROM b WHERE a_id = 1;
a_id
step wx2: DELETE FROM a WHERE i = 1;
step c2: COMMIT;
step rx1: SELECT i FROM a WHERE i = 1;
i
step wy1: INSERT INTO b VALUES (1);
step c1: COMMIT;


@ -0,0 +1,111 @@
Parsed test spec with 2 sessions
starting permutation: wxry1 c1 r2 wyrx2 c2
step wxry1: INSERT INTO child (parent_id) VALUES (0);
step c1: COMMIT;
step r2: SELECT TRUE;
bool
t
step wyrx2: DELETE FROM parent WHERE parent_id = 0;
ERROR: child row exists
step c2: COMMIT;
starting permutation: wxry1 r2 c1 wyrx2 c2
step wxry1: INSERT INTO child (parent_id) VALUES (0);
step r2: SELECT TRUE;
bool
t
step c1: COMMIT;
step wyrx2: DELETE FROM parent WHERE parent_id = 0;
ERROR: could not serialize access due to read/write dependencies among transactions
step c2: COMMIT;
starting permutation: wxry1 r2 wyrx2 c1 c2
step wxry1: INSERT INTO child (parent_id) VALUES (0);
step r2: SELECT TRUE;
bool
t
step wyrx2: DELETE FROM parent WHERE parent_id = 0;
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: wxry1 r2 wyrx2 c2 c1
step wxry1: INSERT INTO child (parent_id) VALUES (0);
step r2: SELECT TRUE;
bool
t
step wyrx2: DELETE FROM parent WHERE parent_id = 0;
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: r2 wxry1 c1 wyrx2 c2
step r2: SELECT TRUE;
bool
t
step wxry1: INSERT INTO child (parent_id) VALUES (0);
step c1: COMMIT;
step wyrx2: DELETE FROM parent WHERE parent_id = 0;
ERROR: could not serialize access due to read/write dependencies among transactions
step c2: COMMIT;
starting permutation: r2 wxry1 wyrx2 c1 c2
step r2: SELECT TRUE;
bool
t
step wxry1: INSERT INTO child (parent_id) VALUES (0);
step wyrx2: DELETE FROM parent WHERE parent_id = 0;
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: r2 wxry1 wyrx2 c2 c1
step r2: SELECT TRUE;
bool
t
step wxry1: INSERT INTO child (parent_id) VALUES (0);
step wyrx2: DELETE FROM parent WHERE parent_id = 0;
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: r2 wyrx2 wxry1 c1 c2
step r2: SELECT TRUE;
bool
t
step wyrx2: DELETE FROM parent WHERE parent_id = 0;
step wxry1: INSERT INTO child (parent_id) VALUES (0);
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: r2 wyrx2 wxry1 c2 c1
step r2: SELECT TRUE;
bool
t
step wyrx2: DELETE FROM parent WHERE parent_id = 0;
step wxry1: INSERT INTO child (parent_id) VALUES (0);
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: r2 wyrx2 c2 wxry1 c1
step r2: SELECT TRUE;
bool
t
step wyrx2: DELETE FROM parent WHERE parent_id = 0;
step c2: COMMIT;
step wxry1: INSERT INTO child (parent_id) VALUES (0);
ERROR: parent row missing
step c1: COMMIT;


@ -0,0 +1,41 @@
Parsed test spec with 2 sessions
starting permutation: rwx1 c1 rwx2 c2
step rwx1: UPDATE test SET t = 'apple' WHERE t = 'pear';
step c1: COMMIT;
step rwx2: UPDATE test SET t = 'pear' WHERE t = 'apple'
step c2: COMMIT;
starting permutation: rwx1 rwx2 c1 c2
step rwx1: UPDATE test SET t = 'apple' WHERE t = 'pear';
step rwx2: UPDATE test SET t = 'pear' WHERE t = 'apple'
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rwx1 rwx2 c2 c1
step rwx1: UPDATE test SET t = 'apple' WHERE t = 'pear';
step rwx2: UPDATE test SET t = 'pear' WHERE t = 'apple'
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rwx2 rwx1 c1 c2
step rwx2: UPDATE test SET t = 'pear' WHERE t = 'apple'
step rwx1: UPDATE test SET t = 'apple' WHERE t = 'pear';
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rwx2 rwx1 c2 c1
step rwx2: UPDATE test SET t = 'pear' WHERE t = 'apple'
step rwx1: UPDATE test SET t = 'apple' WHERE t = 'pear';
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rwx2 c2 rwx1 c1
step rwx2: UPDATE test SET t = 'pear' WHERE t = 'apple'
step c2: COMMIT;
step rwx1: UPDATE test SET t = 'apple' WHERE t = 'pear';
step c1: COMMIT;


@ -0,0 +1,299 @@
Parsed test spec with 2 sessions
starting permutation: rx1 wy1 c1 ry2 wx2 c2
step rx1: SELECT count(*) FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date <= DATE '2009-05-15' AND (exp_date IS NULL OR exp_date > DATE '2009-05-15');
count
1
step wy1: INSERT INTO offense VALUES (1, '123.45(1)a', DATE '2009-05-15');
step c1: COMMIT;
step ry2: SELECT count(*) FROM offense WHERE statute_cite = '123.45(1)a' AND offense_date >= DATE '2008-01-01';
count
1
step wx2: DELETE FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date = DATE '2008-01-01';
step c2: COMMIT;
starting permutation: rx1 wy1 ry2 c1 wx2 c2
step rx1: SELECT count(*) FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date <= DATE '2009-05-15' AND (exp_date IS NULL OR exp_date > DATE '2009-05-15');
count
1
step wy1: INSERT INTO offense VALUES (1, '123.45(1)a', DATE '2009-05-15');
step ry2: SELECT count(*) FROM offense WHERE statute_cite = '123.45(1)a' AND offense_date >= DATE '2008-01-01';
count
0
step c1: COMMIT;
step wx2: DELETE FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date = DATE '2008-01-01';
ERROR: could not serialize access due to read/write dependencies among transactions
step c2: COMMIT;
starting permutation: rx1 wy1 ry2 wx2 c1 c2
step rx1: SELECT count(*) FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date <= DATE '2009-05-15' AND (exp_date IS NULL OR exp_date > DATE '2009-05-15');
count
1
step wy1: INSERT INTO offense VALUES (1, '123.45(1)a', DATE '2009-05-15');
step ry2: SELECT count(*) FROM offense WHERE statute_cite = '123.45(1)a' AND offense_date >= DATE '2008-01-01';
count
0
step wx2: DELETE FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date = DATE '2008-01-01';
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx1 wy1 ry2 wx2 c2 c1
step rx1: SELECT count(*) FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date <= DATE '2009-05-15' AND (exp_date IS NULL OR exp_date > DATE '2009-05-15');
count
1
step wy1: INSERT INTO offense VALUES (1, '123.45(1)a', DATE '2009-05-15');
step ry2: SELECT count(*) FROM offense WHERE statute_cite = '123.45(1)a' AND offense_date >= DATE '2008-01-01';
count
0
step wx2: DELETE FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date = DATE '2008-01-01';
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx1 ry2 wy1 c1 wx2 c2
step rx1: SELECT count(*) FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date <= DATE '2009-05-15' AND (exp_date IS NULL OR exp_date > DATE '2009-05-15');
count
1
step ry2: SELECT count(*) FROM offense WHERE statute_cite = '123.45(1)a' AND offense_date >= DATE '2008-01-01';
count
0
step wy1: INSERT INTO offense VALUES (1, '123.45(1)a', DATE '2009-05-15');
step c1: COMMIT;
step wx2: DELETE FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date = DATE '2008-01-01';
ERROR: could not serialize access due to read/write dependencies among transactions
step c2: COMMIT;
starting permutation: rx1 ry2 wy1 wx2 c1 c2
step rx1: SELECT count(*) FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date <= DATE '2009-05-15' AND (exp_date IS NULL OR exp_date > DATE '2009-05-15');
count
1
step ry2: SELECT count(*) FROM offense WHERE statute_cite = '123.45(1)a' AND offense_date >= DATE '2008-01-01';
count
0
step wy1: INSERT INTO offense VALUES (1, '123.45(1)a', DATE '2009-05-15');
step wx2: DELETE FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date = DATE '2008-01-01';
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx1 ry2 wy1 wx2 c2 c1
step rx1: SELECT count(*) FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date <= DATE '2009-05-15' AND (exp_date IS NULL OR exp_date > DATE '2009-05-15');
count
1
step ry2: SELECT count(*) FROM offense WHERE statute_cite = '123.45(1)a' AND offense_date >= DATE '2008-01-01';
count
0
step wy1: INSERT INTO offense VALUES (1, '123.45(1)a', DATE '2009-05-15');
step wx2: DELETE FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date = DATE '2008-01-01';
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx1 ry2 wx2 wy1 c1 c2
step rx1: SELECT count(*) FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date <= DATE '2009-05-15' AND (exp_date IS NULL OR exp_date > DATE '2009-05-15');
count
1
step ry2: SELECT count(*) FROM offense WHERE statute_cite = '123.45(1)a' AND offense_date >= DATE '2008-01-01';
count
0
step wx2: DELETE FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date = DATE '2008-01-01';
step wy1: INSERT INTO offense VALUES (1, '123.45(1)a', DATE '2009-05-15');
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx1 ry2 wx2 wy1 c2 c1
step rx1: SELECT count(*) FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date <= DATE '2009-05-15' AND (exp_date IS NULL OR exp_date > DATE '2009-05-15');
count
1
step ry2: SELECT count(*) FROM offense WHERE statute_cite = '123.45(1)a' AND offense_date >= DATE '2008-01-01';
count
0
step wx2: DELETE FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date = DATE '2008-01-01';
step wy1: INSERT INTO offense VALUES (1, '123.45(1)a', DATE '2009-05-15');
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: rx1 ry2 wx2 c2 wy1 c1
step rx1: SELECT count(*) FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date <= DATE '2009-05-15' AND (exp_date IS NULL OR exp_date > DATE '2009-05-15');
count
1
step ry2: SELECT count(*) FROM offense WHERE statute_cite = '123.45(1)a' AND offense_date >= DATE '2008-01-01';
count
0
step wx2: DELETE FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date = DATE '2008-01-01';
step c2: COMMIT;
step wy1: INSERT INTO offense VALUES (1, '123.45(1)a', DATE '2009-05-15');
ERROR: could not serialize access due to read/write dependencies among transactions
step c1: COMMIT;
starting permutation: ry2 rx1 wy1 c1 wx2 c2
step ry2: SELECT count(*) FROM offense WHERE statute_cite = '123.45(1)a' AND offense_date >= DATE '2008-01-01';
count
0
step rx1: SELECT count(*) FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date <= DATE '2009-05-15' AND (exp_date IS NULL OR exp_date > DATE '2009-05-15');
count
1
step wy1: INSERT INTO offense VALUES (1, '123.45(1)a', DATE '2009-05-15');
step c1: COMMIT;
step wx2: DELETE FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date = DATE '2008-01-01';
ERROR: could not serialize access due to read/write dependencies among transactions
step c2: COMMIT;
starting permutation: ry2 rx1 wy1 wx2 c1 c2
step ry2: SELECT count(*) FROM offense WHERE statute_cite = '123.45(1)a' AND offense_date >= DATE '2008-01-01';
count
0
step rx1: SELECT count(*) FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date <= DATE '2009-05-15' AND (exp_date IS NULL OR exp_date > DATE '2009-05-15');
count
1
step wy1: INSERT INTO offense VALUES (1, '123.45(1)a', DATE '2009-05-15');
step wx2: DELETE FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date = DATE '2008-01-01';
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: ry2 rx1 wy1 wx2 c2 c1
step ry2: SELECT count(*) FROM offense WHERE statute_cite = '123.45(1)a' AND offense_date >= DATE '2008-01-01';
count
0
step rx1: SELECT count(*) FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date <= DATE '2009-05-15' AND (exp_date IS NULL OR exp_date > DATE '2009-05-15');
count
1
step wy1: INSERT INTO offense VALUES (1, '123.45(1)a', DATE '2009-05-15');
step wx2: DELETE FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date = DATE '2008-01-01';
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: ry2 rx1 wx2 wy1 c1 c2
step ry2: SELECT count(*) FROM offense WHERE statute_cite = '123.45(1)a' AND offense_date >= DATE '2008-01-01';
count
0
step rx1: SELECT count(*) FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date <= DATE '2009-05-15' AND (exp_date IS NULL OR exp_date > DATE '2009-05-15');
count
1
step wx2: DELETE FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date = DATE '2008-01-01';
step wy1: INSERT INTO offense VALUES (1, '123.45(1)a', DATE '2009-05-15');
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: ry2 rx1 wx2 wy1 c2 c1
step ry2: SELECT count(*) FROM offense WHERE statute_cite = '123.45(1)a' AND offense_date >= DATE '2008-01-01';
count
0
step rx1: SELECT count(*) FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date <= DATE '2009-05-15' AND (exp_date IS NULL OR exp_date > DATE '2009-05-15');
count
1
step wx2: DELETE FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date = DATE '2008-01-01';
step wy1: INSERT INTO offense VALUES (1, '123.45(1)a', DATE '2009-05-15');
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: ry2 rx1 wx2 c2 wy1 c1
step ry2: SELECT count(*) FROM offense WHERE statute_cite = '123.45(1)a' AND offense_date >= DATE '2008-01-01';
count
0
step rx1: SELECT count(*) FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date <= DATE '2009-05-15' AND (exp_date IS NULL OR exp_date > DATE '2009-05-15');
count
1
step wx2: DELETE FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date = DATE '2008-01-01';
step c2: COMMIT;
step wy1: INSERT INTO offense VALUES (1, '123.45(1)a', DATE '2009-05-15');
ERROR: could not serialize access due to read/write dependencies among transactions
step c1: COMMIT;
starting permutation: ry2 wx2 rx1 wy1 c1 c2
step ry2: SELECT count(*) FROM offense WHERE statute_cite = '123.45(1)a' AND offense_date >= DATE '2008-01-01';
count
0
step wx2: DELETE FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date = DATE '2008-01-01';
step rx1: SELECT count(*) FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date <= DATE '2009-05-15' AND (exp_date IS NULL OR exp_date > DATE '2009-05-15');
count
1
step wy1: INSERT INTO offense VALUES (1, '123.45(1)a', DATE '2009-05-15');
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: ry2 wx2 rx1 wy1 c2 c1
step ry2: SELECT count(*) FROM offense WHERE statute_cite = '123.45(1)a' AND offense_date >= DATE '2008-01-01';
count
0
step wx2: DELETE FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date = DATE '2008-01-01';
step rx1: SELECT count(*) FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date <= DATE '2009-05-15' AND (exp_date IS NULL OR exp_date > DATE '2009-05-15');
count
1
step wy1: INSERT INTO offense VALUES (1, '123.45(1)a', DATE '2009-05-15');
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: ry2 wx2 rx1 c2 wy1 c1
step ry2: SELECT count(*) FROM offense WHERE statute_cite = '123.45(1)a' AND offense_date >= DATE '2008-01-01';
count
0
step wx2: DELETE FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date = DATE '2008-01-01';
step rx1: SELECT count(*) FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date <= DATE '2009-05-15' AND (exp_date IS NULL OR exp_date > DATE '2009-05-15');
count
1
step c2: COMMIT;
step wy1: INSERT INTO offense VALUES (1, '123.45(1)a', DATE '2009-05-15');
ERROR: could not serialize access due to read/write dependencies among transactions
step c1: COMMIT;
starting permutation: ry2 wx2 c2 rx1 wy1 c1
step ry2: SELECT count(*) FROM offense WHERE statute_cite = '123.45(1)a' AND offense_date >= DATE '2008-01-01';
count
0
step wx2: DELETE FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date = DATE '2008-01-01';
step c2: COMMIT;
step rx1: SELECT count(*) FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date <= DATE '2009-05-15' AND (exp_date IS NULL OR exp_date > DATE '2009-05-15');
count
0
step wy1: INSERT INTO offense VALUES (1, '123.45(1)a', DATE '2009-05-15');
step c1: COMMIT;


@ -0,0 +1,281 @@
Parsed test spec with 2 sessions
starting permutation: wx1 rxy1 c1 wy2 rxy2 c2
step wx1: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'checking';
step rxy1: SELECT SUM(balance) FROM accounts;
sum
1000
step c1: COMMIT;
step wy2: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'savings';
step rxy2: SELECT SUM(balance) FROM accounts;
sum
800
step c2: COMMIT;
starting permutation: wx1 rxy1 wy2 c1 rxy2 c2
step wx1: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'checking';
step rxy1: SELECT SUM(balance) FROM accounts;
sum
1000
step wy2: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'savings';
step c1: COMMIT;
step rxy2: SELECT SUM(balance) FROM accounts;
ERROR: could not serialize access due to read/write dependencies among transactions
step c2: COMMIT;
starting permutation: wx1 rxy1 wy2 rxy2 c1 c2
step wx1: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'checking';
step rxy1: SELECT SUM(balance) FROM accounts;
sum
1000
step wy2: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'savings';
step rxy2: SELECT SUM(balance) FROM accounts;
sum
1000
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: wx1 rxy1 wy2 rxy2 c2 c1
step wx1: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'checking';
step rxy1: SELECT SUM(balance) FROM accounts;
sum
1000
step wy2: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'savings';
step rxy2: SELECT SUM(balance) FROM accounts;
sum
1000
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: wx1 wy2 rxy1 c1 rxy2 c2
step wx1: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'checking';
step wy2: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'savings';
step rxy1: SELECT SUM(balance) FROM accounts;
sum
1000
step c1: COMMIT;
step rxy2: SELECT SUM(balance) FROM accounts;
ERROR: could not serialize access due to read/write dependencies among transactions
step c2: COMMIT;
starting permutation: wx1 wy2 rxy1 rxy2 c1 c2
step wx1: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'checking';
step wy2: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'savings';
step rxy1: SELECT SUM(balance) FROM accounts;
sum
1000
step rxy2: SELECT SUM(balance) FROM accounts;
sum
1000
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: wx1 wy2 rxy1 rxy2 c2 c1
step wx1: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'checking';
step wy2: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'savings';
step rxy1: SELECT SUM(balance) FROM accounts;
sum
1000
step rxy2: SELECT SUM(balance) FROM accounts;
sum
1000
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: wx1 wy2 rxy2 rxy1 c1 c2
step wx1: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'checking';
step wy2: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'savings';
step rxy2: SELECT SUM(balance) FROM accounts;
sum
1000
step rxy1: SELECT SUM(balance) FROM accounts;
sum
1000
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: wx1 wy2 rxy2 rxy1 c2 c1
step wx1: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'checking';
step wy2: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'savings';
step rxy2: SELECT SUM(balance) FROM accounts;
sum
1000
step rxy1: SELECT SUM(balance) FROM accounts;
sum
1000
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: wx1 wy2 rxy2 c2 rxy1 c1
step wx1: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'checking';
step wy2: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'savings';
step rxy2: SELECT SUM(balance) FROM accounts;
sum
1000
step c2: COMMIT;
step rxy1: SELECT SUM(balance) FROM accounts;
ERROR: could not serialize access due to read/write dependencies among transactions
step c1: COMMIT;
starting permutation: wy2 wx1 rxy1 c1 rxy2 c2
step wy2: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'savings';
step wx1: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'checking';
step rxy1: SELECT SUM(balance) FROM accounts;
sum
1000
step c1: COMMIT;
step rxy2: SELECT SUM(balance) FROM accounts;
ERROR: could not serialize access due to read/write dependencies among transactions
step c2: COMMIT;
starting permutation: wy2 wx1 rxy1 rxy2 c1 c2
step wy2: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'savings';
step wx1: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'checking';
step rxy1: SELECT SUM(balance) FROM accounts;
sum
1000
step rxy2: SELECT SUM(balance) FROM accounts;
sum
1000
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: wy2 wx1 rxy1 rxy2 c2 c1
step wy2: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'savings';
step wx1: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'checking';
step rxy1: SELECT SUM(balance) FROM accounts;
sum
1000
step rxy2: SELECT SUM(balance) FROM accounts;
sum
1000
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: wy2 wx1 rxy2 rxy1 c1 c2
step wy2: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'savings';
step wx1: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'checking';
step rxy2: SELECT SUM(balance) FROM accounts;
sum
1000
step rxy1: SELECT SUM(balance) FROM accounts;
sum
1000
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: wy2 wx1 rxy2 rxy1 c2 c1
step wy2: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'savings';
step wx1: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'checking';
step rxy2: SELECT SUM(balance) FROM accounts;
sum
1000
step rxy1: SELECT SUM(balance) FROM accounts;
sum
1000
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: wy2 wx1 rxy2 c2 rxy1 c1
step wy2: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'savings';
step wx1: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'checking';
step rxy2: SELECT SUM(balance) FROM accounts;
sum
1000
step c2: COMMIT;
step rxy1: SELECT SUM(balance) FROM accounts;
ERROR: could not serialize access due to read/write dependencies among transactions
step c1: COMMIT;
starting permutation: wy2 rxy2 wx1 rxy1 c1 c2
step wy2: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'savings';
step rxy2: SELECT SUM(balance) FROM accounts;
sum
1000
step wx1: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'checking';
step rxy1: SELECT SUM(balance) FROM accounts;
sum
1000
step c1: COMMIT;
step c2: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: wy2 rxy2 wx1 rxy1 c2 c1
step wy2: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'savings';
step rxy2: SELECT SUM(balance) FROM accounts;
sum
1000
step wx1: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'checking';
step rxy1: SELECT SUM(balance) FROM accounts;
sum
1000
step c2: COMMIT;
step c1: COMMIT;
ERROR: could not serialize access due to read/write dependencies among transactions
starting permutation: wy2 rxy2 wx1 c2 rxy1 c1
step wy2: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'savings';
step rxy2: SELECT SUM(balance) FROM accounts;
sum
1000
step wx1: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'checking';
step c2: COMMIT;
step rxy1: SELECT SUM(balance) FROM accounts;
ERROR: could not serialize access due to read/write dependencies among transactions
step c1: COMMIT;
starting permutation: wy2 rxy2 c2 wx1 rxy1 c1
step wy2: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'savings';
step rxy2: SELECT SUM(balance) FROM accounts;
sum
1000
step c2: COMMIT;
step wx1: UPDATE accounts SET balance = balance - 200 WHERE accountid = 'checking';
step rxy1: SELECT SUM(balance) FROM accounts;
sum
800
step c1: COMMIT;

File diff suppressed because it is too large


@ -0,0 +1,89 @@
/*-------------------------------------------------------------------------
*
* isolation_main --- pg_regress test launcher for isolation tests
*
* Portions Copyright (c) 1996-2011, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* src/test/isolation/isolation_main.c
*
*-------------------------------------------------------------------------
*/
#include "../regress/pg_regress.h"
/*
* start an isolation tester process for specified file (including
* redirection), and return process ID
*/
static PID_TYPE
isolation_start_test(const char *testname,
_stringlist ** resultfiles,
_stringlist ** expectfiles,
_stringlist ** tags)
{
PID_TYPE pid;
char infile[MAXPGPATH];
char outfile[MAXPGPATH];
char expectfile[MAXPGPATH];
char psql_cmd[MAXPGPATH * 3];
size_t offset = 0;
/*
* Look for files in the output dir first, consistent with a vpath search.
* This is mainly to create more reasonable error messages if the file is
* not found. It also allows local test overrides when running pg_regress
* outside of the source tree.
*/
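/*
 * Illustrative example (editorial, not in the committed file): for
 * testname "two-ids" the lookups below try <outputdir>/specs/two-ids.spec
 * first and fall back to <inputdir>/specs/two-ids.spec, and likewise for
 * the expected-output file under expected/.
 */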
snprintf(infile, sizeof(infile), "%s/specs/%s.spec",
outputdir, testname);
if (!file_exists(infile))
snprintf(infile, sizeof(infile), "%s/specs/%s.spec",
inputdir, testname);
snprintf(outfile, sizeof(outfile), "%s/results/%s.out",
outputdir, testname);
snprintf(expectfile, sizeof(expectfile), "%s/expected/%s.out",
outputdir, testname);
if (!file_exists(expectfile))
snprintf(expectfile, sizeof(expectfile), "%s/expected/%s.out",
inputdir, testname);
add_stringlist_item(resultfiles, outfile);
add_stringlist_item(expectfiles, expectfile);
if (launcher)
offset += snprintf(psql_cmd + offset, sizeof(psql_cmd) - offset,
"%s ", launcher);
snprintf(psql_cmd + offset, sizeof(psql_cmd) - offset,
SYSTEMQUOTE "./isolationtester \"dbname=%s\" < \"%s\" > \"%s\" 2>&1" SYSTEMQUOTE,
dblist->str,
infile,
outfile);
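/*
 * Illustrative example (editorial, not in the committed file): with the
 * default database name "isolationtest" and test "two-ids", the command
 * built above is roughly
 *   ./isolationtester "dbname=isolationtest" < .../two-ids.spec > .../two-ids.out 2>&1
 * optionally preceded by the launcher prefix handled just above.
 */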
pid = spawn_process(psql_cmd);
if (pid == INVALID_PID)
{
fprintf(stderr, _("could not start process for test %s\n"),
testname);
exit_nicely(2);
}
return pid;
}
static void
isolation_init(void)
{
/* set default regression database name */
add_stringlist_item(&dblist, "isolationtest");
}
int
main(int argc, char *argv[])
{
return regression_main(argc, argv, isolation_init, isolation_start_test);
}


@ -0,0 +1,11 @@
test: simple-write-skew
test: receipt-report
test: temporal-range-integrity
test: project-manager
test: classroom-scheduling
test: total-cash
test: referential-integrity
test: ri-trigger
test: partial-index
test: two-ids
test: multiple-row-versions


@ -0,0 +1,372 @@
/*
* src/test/isolation/isolationtester.c
*
* isolationtester.c
* Runs an isolation test specified by a spec file.
*/
#ifdef WIN32
#include <windows.h>
#endif
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "libpq-fe.h"
#include "isolationtester.h"
static PGconn **conns = NULL;
static int nconns = 0;
static void run_all_permutations(TestSpec *testspec);
static void run_all_permutations_recurse(TestSpec *testspec, int nsteps, Step **steps);
static void run_named_permutations(TestSpec *testspec);
static void run_permutation(TestSpec *testspec, int nsteps, Step **steps);
static int step_qsort_cmp(const void *a, const void *b);
static int step_bsearch_cmp(const void *a, const void *b);
static void printResultSet(PGresult *res);
/* close all connections and exit */
static void
exit_nicely(void)
{
int i;
for (i = 0; i < nconns; i++)
PQfinish(conns[i]);
exit(1);
}
int
main(int argc, char **argv)
{
const char *conninfo;
TestSpec *testspec;
int i;
/*
* If the user supplies a parameter on the command line, use it as the
* conninfo string; otherwise default to setting dbname=postgres and
* using environment variables or defaults for all other connection
* parameters.
*/
if (argc > 1)
conninfo = argv[1];
else
conninfo = "dbname = postgres";
/* Read the test spec from stdin */
spec_yyparse();
testspec = &parseresult;
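/*
 * This produces the "Parsed test spec with N sessions" header seen at the
 * top of each expected-output file above.
 */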
printf("Parsed test spec with %d sessions\n", testspec->nsessions);
/* Establish connections to the database, one for each session */
nconns = testspec->nsessions;
conns = calloc(nconns, sizeof(PGconn *));
for (i = 0; i < testspec->nsessions; i++)
{
PGresult *res;
conns[i] = PQconnectdb(conninfo);
if (PQstatus(conns[i]) != CONNECTION_OK)
{
fprintf(stderr, "Connection %d to database failed: %s",
i, PQerrorMessage(conns[i]));
exit_nicely();
}
/*
* Suppress NOTIFY messages, which otherwise pop into results at odd
* places.
*/
res = PQexec(conns[i], "SET client_min_messages = warning;");
if (PQresultStatus(res) != PGRES_COMMAND_OK)
{
fprintf(stderr, "message level setup failed: %s", PQerrorMessage(conns[i]));
exit_nicely();
}
PQclear(res);
}
/* Set the session index fields in steps. */
for (i = 0; i < testspec->nsessions; i++)
{
Session *session = testspec->sessions[i];
int stepindex;
for (stepindex = 0; stepindex < session->nsteps; stepindex++)
session->steps[stepindex]->session = i;
}
/*
* Run the permutations specified in the spec, or all if none were
* explicitly specified.
*/
if (testspec->permutations)
run_named_permutations(testspec);
else
run_all_permutations(testspec);
/* Clean up and exit */
for (i = 0; i < nconns; i++)
PQfinish(conns[i]);
return 0;
}
static int *piles;
/*
* Run all permutations of the steps and sessions.
*/
static void
run_all_permutations(TestSpec *testspec)
{
int nsteps;
int i;
Step **steps;
/* Count the total number of steps in all sessions */
nsteps = 0;
for (i = 0; i < testspec->nsessions; i++)
nsteps += testspec->sessions[i]->nsteps;
steps = malloc(sizeof(Step *) * nsteps);
/*
* To generate the permutations, we conceptually put the steps of
* each session on a pile. To generate a permutation, we pick steps
* from the piles until all piles are empty. By picking steps from
* piles in different order, we get different permutations.
*
* A pile is actually just an integer which tells how many steps
* we've already picked from this pile.
*/
piles = malloc(sizeof(int) * testspec->nsessions);
for (i = 0; i < testspec->nsessions; i++)
piles[i] = 0;
run_all_permutations_recurse(testspec, 0, steps);
}
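/*
 * Illustrative note (an editorial example, not from the original source):
 * with session A holding steps {a1, a2} and session B holding {b1}, the
 * recursion below produces the interleavings a1 a2 b1, a1 b1 a2 and
 * b1 a1 a2 -- every merge of the piles that preserves each session's own
 * step order.
 */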
static void
run_all_permutations_recurse(TestSpec *testspec, int nsteps, Step **steps)
{
int i;
int found = 0;
for (i = 0; i < testspec->nsessions; i++)
{
/* If there's any more steps in this pile, pick it and recurse */
if (piles[i] < testspec->sessions[i]->nsteps)
{
steps[nsteps] = testspec->sessions[i]->steps[piles[i]];
piles[i]++;
run_all_permutations_recurse(testspec, nsteps + 1, steps);
piles[i]--;
found = 1;
}
}
/* If all the piles were empty, this permutation is completed. Run it */
if (!found)
run_permutation(testspec, nsteps, steps);
}
/*
* Run permutations given in the test spec
*/
static void
run_named_permutations(TestSpec *testspec)
{
int i, j;
int n;
int nallsteps;
Step **allsteps;
/* First create a lookup table of all steps */
nallsteps = 0;
for (i = 0; i < testspec->nsessions; i++)
nallsteps += testspec->sessions[i]->nsteps;
allsteps = malloc(nallsteps * sizeof(Step *));
n = 0;
for (i = 0; i < testspec->nsessions; i++)
{
for (j = 0; j < testspec->sessions[i]->nsteps; j++)
allsteps[n++] = testspec->sessions[i]->steps[j];
}
qsort(allsteps, nallsteps, sizeof(Step *), &step_qsort_cmp);
for (i = 0; i < testspec->npermutations; i++)
{
Permutation *p = testspec->permutations[i];
Step **steps;
steps = malloc(p->nsteps * sizeof(Step *));
/* Find all the named steps from the lookup table */
for (j = 0; j < p->nsteps; j++)
{
Step **match;
match = (Step **) bsearch(p->stepnames[j], allsteps, nallsteps,
sizeof(Step *), &step_bsearch_cmp);
/* bsearch returns NULL for an unknown step; check before dereferencing */
if (match == NULL)
{
fprintf(stderr, "undefined step \"%s\" specified in permutation\n", p->stepnames[j]);
exit_nicely();
}
steps[j] = *match;
}
run_permutation(testspec, p->nsteps, steps);
free(steps);
}
}
static int
step_qsort_cmp(const void *a, const void *b)
{
Step *stepa = *((Step **) a);
Step *stepb = *((Step **) b);
return strcmp(stepa->name, stepb->name);
}
static int
step_bsearch_cmp(const void *a, const void *b)
{
char *stepname = (char *) a;
Step *step = *((Step **) b);
return strcmp(stepname, step->name);
}
/*
* Run one permutation
*/
static void
run_permutation(TestSpec *testspec, int nsteps, Step **steps)
{
PGresult *res;
int i;
printf("\nstarting permutation:");
for (i = 0; i < nsteps; i++)
printf(" %s", steps[i]->name);
printf("\n");
/* Perform setup */
if (testspec->setupsql)
{
res = PQexec(conns[0], testspec->setupsql);
if (PQresultStatus(res) != PGRES_COMMAND_OK)
{
fprintf(stderr, "setup failed: %s", PQerrorMessage(conns[0]));
exit_nicely();
}
PQclear(res);
}
/* Perform per-session setup */
for (i = 0; i < testspec->nsessions; i++)
{
if (testspec->sessions[i]->setupsql)
{
res = PQexec(conns[i], testspec->sessions[i]->setupsql);
if (PQresultStatus(res) != PGRES_COMMAND_OK)
{
fprintf(stderr, "setup of session %s failed: %s",
testspec->sessions[i]->name,
PQerrorMessage(conns[i]));
exit_nicely();
}
PQclear(res);
}
}
/* Perform steps */
for (i = 0; i < nsteps; i++)
{
Step *step = steps[i];
printf("step %s: %s\n", step->name, step->sql);
res = PQexec(conns[step->session], step->sql);
switch(PQresultStatus(res))
{
case PGRES_COMMAND_OK:
break;
case PGRES_TUPLES_OK:
printResultSet(res);
break;
case PGRES_FATAL_ERROR:
/* Detail may contain xid values, so just show primary. */
printf("%s: %s\n", PQresultErrorField(res, PG_DIAG_SEVERITY),
PQresultErrorField(res, PG_DIAG_MESSAGE_PRIMARY));
break;
default:
printf("unexpected result status: %s\n",
PQresStatus(PQresultStatus(res)));
}
PQclear(res);
}
/* Perform per-session teardown */
for (i = 0; i < testspec->nsessions; i++)
{
if (testspec->sessions[i]->teardownsql)
{
res = PQexec(conns[i], testspec->sessions[i]->teardownsql);
if (PQresultStatus(res) != PGRES_COMMAND_OK)
{
fprintf(stderr, "teardown of session %s failed: %s",
testspec->sessions[i]->name,
PQerrorMessage(conns[i]));
/* don't exit on teardown failure */
}
PQclear(res);
}
}
/* Perform teardown */
if (testspec->teardownsql)
{
res = PQexec(conns[0], testspec->teardownsql);
if (PQresultStatus(res) != PGRES_COMMAND_OK)
{
fprintf(stderr, "teardown failed: %s",
PQerrorMessage(conns[0]));
/* don't exit on teardown failure */
}
PQclear(res);
}
}
static void
printResultSet(PGresult *res)
{
int nFields;
int i, j;
/* first, print out the attribute names */
nFields = PQnfields(res);
for (i = 0; i < nFields; i++)
printf("%-15s", PQfname(res, i));
printf("\n\n");
/* next, print out the rows */
for (i = 0; i < PQntuples(res); i++)
{
for (j = 0; j < nFields; j++)
printf("%-15s", PQgetvalue(res, i, j));
printf("\n");
}
}

View File

@ -0,0 +1,59 @@
/*-------------------------------------------------------------------------
*
* isolationtester.h
* include file for the isolation test program
*
*
* Portions Copyright (c) 1996-2011, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* src/test/isolation/isolationtester.h
*
*-------------------------------------------------------------------------
*/
#ifndef ISOLATIONTESTER_H
#define ISOLATIONTESTER_H
typedef struct Session Session;
typedef struct Step Step;
struct Session
{
char *name;
char *setupsql;
char *teardownsql;
Step **steps;
int nsteps;
};
struct Step
{
int session;
char *name;
char *sql;
};
typedef struct
{
int nsteps;
char **stepnames;
} Permutation;
typedef struct
{
char *setupsql;
char *teardownsql;
Session **sessions;
int nsessions;
Permutation **permutations;
int npermutations;
} TestSpec;
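/*
 * Illustrative mapping (an editorial sketch; the spec text shown here is
 * hypothetical): a spec such as
 *
 *   setup { CREATE TABLE t (i int); }
 *   session "s1"
 *   step "w1" { INSERT INTO t VALUES (1); }
 *
 * parses (see specparse.y) into a TestSpec whose setupsql holds the setup
 * block and whose sessions[0] is a Session named "s1" with one Step named
 * "w1". With no "permutation" lines, npermutations is 0 and the tester
 * runs all interleavings of the steps.
 */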
extern TestSpec parseresult;
extern int spec_yyparse(void);
extern int spec_yylex(void);
extern void spec_yyerror(const char *str);
#endif /* ISOLATIONTESTER_H */

View File

@ -0,0 +1,188 @@
%{
/*-------------------------------------------------------------------------
*
* specparse.y
* bison grammar for the isolation test file format
*
* Portions Copyright (c) 1996-2011, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
*-------------------------------------------------------------------------
*/
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include "isolationtester.h"
TestSpec parseresult; /* result of parsing is left here */
%}
%expect 0
%name-prefix="spec_yy"
%union
{
char *str;
Session *session;
Step *step;
Permutation *permutation;
struct
{
void **elements;
int nelements;
} ptr_list;
}
%type <str> opt_setup opt_teardown
%type <ptr_list> step_list session_list permutation_list opt_permutation_list
%type <ptr_list> string_list
%type <session> session
%type <step> step
%type <permutation> permutation
%token <str> sqlblock string
%token PERMUTATION SESSION SETUP STEP TEARDOWN TEST
%%
TestSpec:
opt_setup
opt_teardown
session_list
opt_permutation_list
{
parseresult.setupsql = $1;
parseresult.teardownsql = $2;
parseresult.sessions = (Session **) $3.elements;
parseresult.nsessions = $3.nelements;
parseresult.permutations = (Permutation **) $4.elements;
parseresult.npermutations = $4.nelements;
}
;
opt_setup:
/* EMPTY */ { $$ = NULL; }
| SETUP sqlblock { $$ = $2; }
;
opt_teardown:
/* EMPTY */ { $$ = NULL; }
| TEARDOWN sqlblock { $$ = $2; }
;
session_list:
session_list session
{
$$.elements = realloc($1.elements,
($1.nelements + 1) * sizeof(void *));
$$.elements[$1.nelements] = $2;
$$.nelements = $1.nelements + 1;
}
| session
{
$$.nelements = 1;
$$.elements = malloc(sizeof(void *));
$$.elements[0] = $1;
}
;
session:
SESSION string opt_setup step_list opt_teardown
{
$$ = malloc(sizeof(Session));
$$->name = $2;
$$->setupsql = $3;
$$->steps = (Step **) $4.elements;
$$->nsteps = $4.nelements;
$$->teardownsql = $5;
}
;
step_list:
step_list step
{
$$.elements = realloc($1.elements,
($1.nelements + 1) * sizeof(void *));
$$.elements[$1.nelements] = $2;
$$.nelements = $1.nelements + 1;
}
| step
{
$$.nelements = 1;
$$.elements = malloc(sizeof(void *));
$$.elements[0] = $1;
}
;
step:
STEP string sqlblock
{
$$ = malloc(sizeof(Step));
$$->name = $2;
$$->sql = $3;
}
;
opt_permutation_list:
permutation_list
{
$$ = $1;
}
| /* EMPTY */
{
$$.elements = NULL;
$$.nelements = 0;
}
;
permutation_list:
permutation_list permutation
{
$$.elements = realloc($1.elements,
($1.nelements + 1) * sizeof(void *));
$$.elements[$1.nelements] = $2;
$$.nelements = $1.nelements + 1;
}
| permutation
{
$$.nelements = 1;
$$.elements = malloc(sizeof(void *));
$$.elements[0] = $1;
}
;
permutation:
PERMUTATION string_list
{
$$ = malloc(sizeof(Permutation));
$$->stepnames = (char **) $2.elements;
$$->nsteps = $2.nelements;
}
;
string_list:
string_list string
{
$$.elements = realloc($1.elements,
($1.nelements + 1) * sizeof(void *));
$$.elements[$1.nelements] = $2;
$$.nelements = $1.nelements + 1;
}
| string
{
$$.nelements = 1;
$$.elements = malloc(sizeof(void *));
$$.elements[0] = $1;
}
;
%%
#include "specscanner.c"

View File

@ -0,0 +1,29 @@
# Classroom Scheduling test
#
# Ensure that the classroom is not scheduled more than once
# for any moment in time.
#
# Any overlap between the transactions must cause a serialization failure.
setup
{
CREATE TABLE room_reservation (room_id text NOT NULL, start_time timestamp with time zone NOT NULL, end_time timestamp with time zone NOT NULL, description text NOT NULL, CONSTRAINT room_reservation_pkey PRIMARY KEY (room_id, start_time));
INSERT INTO room_reservation VALUES ('101', TIMESTAMP WITH TIME ZONE '2010-04-01 10:00', TIMESTAMP WITH TIME ZONE '2010-04-01 11:00', 'Bob');
}
teardown
{
DROP TABLE room_reservation;
}
session "s1"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "rx1" { SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:00' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:00'; }
step "wy1" { INSERT INTO room_reservation VALUES ('101', TIMESTAMP WITH TIME ZONE '2010-04-01 13:00', TIMESTAMP WITH TIME ZONE '2010-04-01 14:00', 'Carol'); }
step "c1" { COMMIT; }
session "s2"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "ry2" { SELECT count(*) FROM room_reservation WHERE room_id = '101' AND start_time < TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' AND end_time > TIMESTAMP WITH TIME ZONE '2010-04-01 13:30'; }
step "wx2" { UPDATE room_reservation SET start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 13:30', end_time = TIMESTAMP WITH TIME ZONE '2010-04-01 14:30' WHERE room_id = '101' AND start_time = TIMESTAMP WITH TIME ZONE '2010-04-01 10:00'; }
step "c2" { COMMIT; }

View File

@ -0,0 +1,48 @@
# Multiple Row Versions test
#
# This test is designed to ensure that predicate locks taken on one version
# of a row are detected as conflicts when a later version of the row is
# updated or deleted by a transaction concurrent to the reader.
#
# Due to long permutation setup time, we are only testing one specific
# permutation, which should get a serialization error.
setup
{
CREATE TABLE t (id int NOT NULL, txt text) WITH (fillfactor=50);
INSERT INTO t (id)
SELECT x FROM (SELECT * FROM generate_series(1, 1000000)) a(x);
ALTER TABLE t ADD PRIMARY KEY (id);
}
teardown
{
DROP TABLE t;
}
session "s1"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "rx1" { SELECT * FROM t WHERE id = 1000000; }
# delay until after T3 commits
step "wz1" { UPDATE t SET txt = 'a' WHERE id = 1; }
step "c1" { COMMIT; }
session "s2"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "wx2" { UPDATE t SET txt = 'b' WHERE id = 1000000; }
step "c2" { COMMIT; }
session "s3"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "wx3" { UPDATE t SET txt = 'c' WHERE id = 1000000; }
step "ry3" { SELECT * FROM t WHERE id = 500000; }
# delay until after T4 commits
step "c3" { COMMIT; }
session "s4"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "wy4" { UPDATE t SET txt = 'd' WHERE id = 500000; }
step "rz4" { SELECT * FROM t WHERE id = 1; }
step "c4" { COMMIT; }
permutation "rx1" "wx2" "c2" "wx3" "ry3" "wy4" "rz4" "c4" "c3" "wz1" "c1"

View File

@ -0,0 +1,32 @@
# Partial Index test
#
# Make sure that an update which moves a row out of a partial index
# is handled correctly. In early versions, an attempt at optimization
# broke this behavior, allowing anomalies.
#
# Any overlap between the transactions must cause a serialization failure.
setup
{
create table test_t (id integer, val1 text, val2 integer);
create index test_idx on test_t(id) where val2 = 1;
insert into test_t (select generate_series(0, 10000), 'a', 2);
insert into test_t (select generate_series(0, 10), 'a', 1);
}
teardown
{
DROP TABLE test_t;
}
session "s1"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "rxy1" { select * from test_t where val2 = 1; }
step "wx1" { update test_t set val2 = 2 where val2 = 1 and id = 10; }
step "c1" { COMMIT; }
session "s2"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "wy2" { update test_t set val2 = 2 where val2 = 1 and id = 9; }
step "rxy2" { select * from test_t where val2 = 1; }
step "c2" { COMMIT; }

View File

@ -0,0 +1,30 @@
# Project Manager test
#
# Ensure that the person who is on the project as a manager
# is flagged as a project manager in the person table.
#
# Any overlap between the transactions must cause a serialization failure.
setup
{
CREATE TABLE person (person_id int NOT NULL PRIMARY KEY, name text NOT NULL, is_project_manager bool NOT NULL);
INSERT INTO person VALUES (1, 'Robert Haas', true);
CREATE TABLE project (project_no int NOT NULL PRIMARY KEY, description text NOT NULL, project_manager int NOT NULL);
}
teardown
{
DROP TABLE person, project;
}
session "s1"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "rx1" { SELECT count(*) FROM person WHERE person_id = 1 AND is_project_manager; }
step "wy1" { INSERT INTO project VALUES (101, 'Build Great Wall', 1); }
step "c1" { COMMIT; }
session "s2"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "ry2" { SELECT count(*) FROM project WHERE project_manager = 1; }
step "wx2" { UPDATE person SET is_project_manager = false WHERE person_id = 1; }
step "c2" { COMMIT; }

View File

@ -0,0 +1,47 @@
# Daily Report of Receipts test.
#
# This test doesn't persist a bad state in the database; rather, it
# provides a view of the data which is not consistent with any
# order of execution of the serializable transactions. It
# demonstrates a situation where the deposit date for receipts could
# be changed and a report of the closed day's receipts subsequently
# run, missing a receipt from the day that was closed.
#
# There are only six permutations which must cause a serialization failure.
# Failure cases are where s1 overlaps both s2 and s3, but s2 commits before
# s3 executes its first SELECT.
#
# As long as s3 is declared READ ONLY there should be no false positives.
# If s3 were changed to READ WRITE, we would currently expect 42 false
# positives. Further work dealing with de facto READ ONLY transactions
# may be able to reduce or eliminate those false positives.
setup
{
CREATE TABLE ctl (k text NOT NULL PRIMARY KEY, deposit_date date NOT NULL);
INSERT INTO ctl VALUES ('receipt', DATE '2008-12-22');
CREATE TABLE receipt (receipt_no int NOT NULL PRIMARY KEY, deposit_date date NOT NULL, amount numeric(13,2));
INSERT INTO receipt VALUES (1, (SELECT deposit_date FROM ctl WHERE k = 'receipt'), 1.00);
INSERT INTO receipt VALUES (2, (SELECT deposit_date FROM ctl WHERE k = 'receipt'), 2.00);
}
teardown
{
DROP TABLE ctl, receipt;
}
session "s1"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "rxwy1" { INSERT INTO receipt VALUES (3, (SELECT deposit_date FROM ctl WHERE k = 'receipt'), 4.00); }
step "c1" { COMMIT; }
session "s2"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "wx2" { UPDATE ctl SET deposit_date = DATE '2008-12-23' WHERE k = 'receipt'; }
step "c2" { COMMIT; }
session "s3"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE, READ ONLY; }
step "rx3" { SELECT * FROM ctl WHERE k = 'receipt'; }
step "ry3" { SELECT * FROM receipt WHERE deposit_date = DATE '2008-12-22'; }
step "c3" { COMMIT; }

View File

@ -0,0 +1,32 @@
# Referential Integrity test
#
# The assumption here is that the application code issuing the SELECT
# to test for the presence or absence of a related record would do the
# right thing -- this script doesn't include that logic.
#
# Any overlap between the transactions must cause a serialization failure.
setup
{
CREATE TABLE a (i int PRIMARY KEY);
CREATE TABLE b (a_id int);
INSERT INTO a VALUES (1);
}
teardown
{
DROP TABLE a, b;
}
session "s1"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "rx1" { SELECT i FROM a WHERE i = 1; }
step "wy1" { INSERT INTO b VALUES (1); }
step "c1" { COMMIT; }
session "s2"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "rx2" { SELECT i FROM a WHERE i = 1; }
step "ry2" { SELECT a_id FROM b WHERE a_id = 1; }
step "wx2" { DELETE FROM a WHERE i = 1; }
step "c2" { COMMIT; }

View File

@ -0,0 +1,53 @@
# RI Trigger test
#
# Test trigger-based referential integrity enforcement.
#
# Any overlap between the transactions must cause a serialization failure.
setup
{
CREATE TABLE parent (parent_id SERIAL NOT NULL PRIMARY KEY);
CREATE TABLE child (child_id SERIAL NOT NULL PRIMARY KEY, parent_id INTEGER NOT NULL);
CREATE FUNCTION ri_parent() RETURNS TRIGGER LANGUAGE PLPGSQL AS $body$
BEGIN
PERFORM TRUE FROM child WHERE parent_id = OLD.parent_id;
IF FOUND THEN
RAISE SQLSTATE '23503' USING MESSAGE = 'child row exists';
END IF;
IF TG_OP = 'DELETE' THEN
RETURN OLD;
END IF;
RETURN NEW;
END;
$body$;
CREATE TRIGGER ri_parent BEFORE UPDATE OR DELETE ON parent FOR EACH ROW EXECUTE PROCEDURE ri_parent();
CREATE FUNCTION ri_child() RETURNS TRIGGER LANGUAGE PLPGSQL AS $body$
BEGIN
PERFORM TRUE FROM parent WHERE parent_id = NEW.parent_id;
IF NOT FOUND THEN
RAISE SQLSTATE '23503' USING MESSAGE = 'parent row missing';
END IF;
RETURN NEW;
END;
$body$;
CREATE TRIGGER ri_child BEFORE INSERT OR UPDATE ON child FOR EACH ROW EXECUTE PROCEDURE ri_child();
INSERT INTO parent VALUES(0);
}
teardown
{
DROP TABLE parent, child;
DROP FUNCTION ri_parent();
DROP FUNCTION ri_child();
}
session "s1"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "wxry1" { INSERT INTO child (parent_id) VALUES (0); }
step "c1" { COMMIT; }
session "s2"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "r2" { SELECT TRUE; }
step "wyrx2" { DELETE FROM parent WHERE parent_id = 0; }
step "c2" { COMMIT; }

View File

@ -0,0 +1,30 @@
# Write skew test.
#
# This test has two serializable transactions: one which updates all
# 'apple' rows to 'pear' and one which updates all 'pear' rows to
# 'apple'. If these were serialized (run one at a time) either
# value could be present, but not both. One must be rolled back to
# prevent the write skew anomaly.
#
# Any overlap between the transactions must cause a serialization failure.
setup
{
CREATE TABLE test (i int PRIMARY KEY, t text);
INSERT INTO test VALUES (5, 'apple'), (7, 'pear'), (11, 'banana');
}
teardown
{
DROP TABLE test;
}
session "s1"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "rwx1" { UPDATE test SET t = 'apple' WHERE t = 'pear'; }
step "c1" { COMMIT; }
session "s2"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "rwx2" { UPDATE test SET t = 'pear' WHERE t = 'apple'}
step "c2" { COMMIT; }

View File

@ -0,0 +1,38 @@
# Temporal Range Integrity test
#
# Snapshot isolation fails with simple referential integrity tests,
# but those don't make for good demonstrations because people just
# say that foreign key definitions should be used instead. There
# are many conceptually similar integrity tests without built-in
# support, and those will fail when enforced in triggers.
# This is intended to illustrate such cases. It is obviously very
# hard to exercise all these permutations when the code is actually
# in a trigger; this test pulls what would normally be inside of
# triggers out to the top level to control the permutations.
#
# Any overlap between the transactions must cause a serialization failure.
setup
{
CREATE TABLE statute (statute_cite text NOT NULL, eff_date date NOT NULL, exp_date date, CONSTRAINT statute_pkey PRIMARY KEY (statute_cite, eff_date));
INSERT INTO statute VALUES ('123.45(1)a', DATE '2008-01-01', NULL);
CREATE TABLE offense (offense_no int NOT NULL, statute_cite text NOT NULL, offense_date date NOT NULL, CONSTRAINT offense_pkey PRIMARY KEY (offense_no));
}
teardown
{
DROP TABLE statute, offense;
}
session "s1"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "rx1" { SELECT count(*) FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date <= DATE '2009-05-15' AND (exp_date IS NULL OR exp_date > DATE '2009-05-15'); }
step "wy1" { INSERT INTO offense VALUES (1, '123.45(1)a', DATE '2009-05-15'); }
step "c1" { COMMIT; }
session "s2"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "ry2" { SELECT count(*) FROM offense WHERE statute_cite = '123.45(1)a' AND offense_date >= DATE '2008-01-01'; }
step "wx2" { DELETE FROM statute WHERE statute_cite = '123.45(1)a' AND eff_date = DATE '2008-01-01'; }
step "c2" { COMMIT; }

View File

@ -0,0 +1,28 @@
# Total Cash test
#
# Another famous test of a snapshot isolation anomaly.
#
# Any overlap between the transactions must cause a serialization failure.
setup
{
CREATE TABLE accounts (accountid text NOT NULL PRIMARY KEY, balance numeric not null);
INSERT INTO accounts VALUES ('checking', 600),('savings',600);
}
teardown
{
DROP TABLE accounts;
}
session "s1"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "wx1" { UPDATE accounts SET balance = balance - 200 WHERE accountid = 'checking'; }
step "rxy1" { SELECT SUM(balance) FROM accounts; }
step "c1" { COMMIT; }
session "s2"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "wy2" { UPDATE accounts SET balance = balance - 200 WHERE accountid = 'savings'; }
step "rxy2" { SELECT SUM(balance) FROM accounts; }
step "c2" { COMMIT; }

View File

@ -0,0 +1,40 @@
# Two IDs test
#
# Small, simple test showing read-only anomalies.
#
# There are only four permutations which must cause a serialization failure.
# Required failure cases are where s2 overlaps both s1 and s3, but s1
# commits before s3 executes its first SELECT.
#
# If s3 were declared READ ONLY there would be no false positives.
# With s3 defaulting to READ WRITE, we currently expect 12 false
# positives. Further work dealing with de facto READ ONLY transactions
# may be able to reduce or eliminate those false positives.
setup
{
create table D1 (id int not null);
create table D2 (id int not null);
insert into D1 values (1);
insert into D2 values (1);
}
teardown
{
DROP TABLE D1, D2;
}
session "s1"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "wx1" { update D1 set id = id + 1; }
step "c1" { COMMIT; }
session "s2"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "rxwy2" { update D2 set id = (select id+1 from D1); }
step "c2" { COMMIT; }
session "s3"
setup { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "ry3" { select id from D2; }
step "c3" { COMMIT; }

View File

@ -0,0 +1,103 @@
%{
/*-------------------------------------------------------------------------
*
* specscanner.l
* a lexical scanner for an isolation test specification
*
* Portions Copyright (c) 1996-2011, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
*-------------------------------------------------------------------------
*/
static int yyline = 1; /* line number for error reporting */
static char litbuf[1024];
static int litbufpos = 0;
static void addlitchar(const char c);
%}
%option 8bit
%option never-interactive
%option nodefault
%option noinput
%option nounput
%option noyywrap
%option prefix="spec_yy"
%x sql
%x qstr
non_newline [^\n\r]
space [ \t\n\r\f]
comment ("#"{non_newline}*)
whitespace ({space}+|{comment})
%%
permutation { return(PERMUTATION); }
session { return(SESSION); }
setup { return(SETUP); }
step { return(STEP); }
teardown { return(TEARDOWN); }
[\n] { yyline++; }
{whitespace} {
/* ignore */
}
\" {
litbufpos = 0;
BEGIN(qstr);
}
<qstr>\" {
litbuf[litbufpos] = '\0';
yylval.str = strdup(litbuf);
BEGIN(INITIAL);
return(string);
}
<qstr>. { addlitchar(yytext[0]); }
"{" {
litbufpos = 0;
BEGIN(sql);
}
<sql>"}" {
litbuf[litbufpos] = '\0';
yylval.str = strdup(litbuf);
BEGIN(INITIAL);
return(sqlblock);
}
<sql>[^}] { addlitchar(yytext[0]);}
. {
fprintf(stderr, "syntax error at line %d: unexpected character \"%s\"\n", yyline, yytext);
exit(1);
}
%%
static void
addlitchar(const char c)
{
if (litbufpos >= sizeof(litbuf) - 1)
{
fprintf(stderr, "SQL step too long\n");
exit(1);
}
litbuf[litbufpos++] = c;
}
void
yyerror(const char *message)
{
fprintf(stderr, "%s at line %d\n", message, yyline);
exit(1);
}

View File

@ -9,7 +9,7 @@
CREATE TABLE pxtest1 (foobar VARCHAR(10));
INSERT INTO pxtest1 VALUES ('aaa');
-- Test PREPARE TRANSACTION
BEGIN;
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
UPDATE pxtest1 SET foobar = 'bbb' WHERE foobar = 'aaa';
SELECT * FROM pxtest1;
foobar
@ -45,7 +45,7 @@ SELECT gid FROM pg_prepared_xacts;
(0 rows)
-- Test COMMIT PREPARED
BEGIN;
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
INSERT INTO pxtest1 VALUES ('ddd');
SELECT * FROM pxtest1;
foobar
@ -70,7 +70,7 @@ SELECT * FROM pxtest1;
(2 rows)
-- Test duplicate gids
BEGIN;
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
UPDATE pxtest1 SET foobar = 'eee' WHERE foobar = 'ddd';
SELECT * FROM pxtest1;
foobar
@ -86,7 +86,7 @@ SELECT gid FROM pg_prepared_xacts;
foo3
(1 row)
BEGIN;
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
INSERT INTO pxtest1 VALUES ('fff');
SELECT * FROM pxtest1;
foobar
@ -117,7 +117,7 @@ SELECT * FROM pxtest1;
-- Clean up
DROP TABLE pxtest1;
-- Test subtransactions
BEGIN;
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
CREATE TABLE pxtest2 (a int);
INSERT INTO pxtest2 VALUES (1);
SAVEPOINT a;
@ -128,7 +128,7 @@ BEGIN;
PREPARE TRANSACTION 'regress-one';
CREATE TABLE pxtest3(fff int);
-- Test shared invalidation
BEGIN;
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
DROP TABLE pxtest3;
CREATE TABLE pxtest4 (a int);
INSERT INTO pxtest4 VALUES (1);

View File

@ -9,7 +9,7 @@
CREATE TABLE pxtest1 (foobar VARCHAR(10));
INSERT INTO pxtest1 VALUES ('aaa');
-- Test PREPARE TRANSACTION
BEGIN;
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
UPDATE pxtest1 SET foobar = 'bbb' WHERE foobar = 'aaa';
SELECT * FROM pxtest1;
foobar
@ -47,7 +47,7 @@ SELECT gid FROM pg_prepared_xacts;
(0 rows)
-- Test COMMIT PREPARED
BEGIN;
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
INSERT INTO pxtest1 VALUES ('ddd');
SELECT * FROM pxtest1;
foobar
@ -74,7 +74,7 @@ SELECT * FROM pxtest1;
(1 row)
-- Test duplicate gids
BEGIN;
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
UPDATE pxtest1 SET foobar = 'eee' WHERE foobar = 'ddd';
SELECT * FROM pxtest1;
foobar
@ -90,7 +90,7 @@ SELECT gid FROM pg_prepared_xacts;
-----
(0 rows)
BEGIN;
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
INSERT INTO pxtest1 VALUES ('fff');
SELECT * FROM pxtest1;
foobar
@ -120,7 +120,7 @@ SELECT * FROM pxtest1;
-- Clean up
DROP TABLE pxtest1;
-- Test subtransactions
BEGIN;
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
CREATE TABLE pxtest2 (a int);
INSERT INTO pxtest2 VALUES (1);
SAVEPOINT a;
@ -133,7 +133,7 @@ ERROR: prepared transactions are disabled
HINT: Set max_prepared_transactions to a nonzero value.
CREATE TABLE pxtest3(fff int);
-- Test shared invalidation
BEGIN;
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
DROP TABLE pxtest3;
CREATE TABLE pxtest4 (a int);
INSERT INTO pxtest4 VALUES (1);

View File

@ -44,7 +44,7 @@ SELECT * FROM aggtest;
CREATE TABLE writetest (a int);
CREATE TEMPORARY TABLE temptest (a int);
BEGIN;
SET TRANSACTION READ ONLY; -- ok
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE, READ ONLY, DEFERRABLE; -- ok
SELECT * FROM writetest; -- ok
a
---

View File

@ -14,7 +14,7 @@ INSERT INTO pxtest1 VALUES ('aaa');
-- Test PREPARE TRANSACTION
BEGIN;
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
UPDATE pxtest1 SET foobar = 'bbb' WHERE foobar = 'aaa';
SELECT * FROM pxtest1;
PREPARE TRANSACTION 'foo1';
@ -33,7 +33,7 @@ SELECT gid FROM pg_prepared_xacts;
-- Test COMMIT PREPARED
BEGIN;
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
INSERT INTO pxtest1 VALUES ('ddd');
SELECT * FROM pxtest1;
PREPARE TRANSACTION 'foo2';
@ -45,14 +45,14 @@ COMMIT PREPARED 'foo2';
SELECT * FROM pxtest1;
-- Test duplicate gids
BEGIN;
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
UPDATE pxtest1 SET foobar = 'eee' WHERE foobar = 'ddd';
SELECT * FROM pxtest1;
PREPARE TRANSACTION 'foo3';
SELECT gid FROM pg_prepared_xacts;
BEGIN;
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
INSERT INTO pxtest1 VALUES ('fff');
SELECT * FROM pxtest1;
@ -69,7 +69,7 @@ SELECT * FROM pxtest1;
DROP TABLE pxtest1;
-- Test subtransactions
BEGIN;
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
CREATE TABLE pxtest2 (a int);
INSERT INTO pxtest2 VALUES (1);
SAVEPOINT a;
@ -82,7 +82,7 @@ PREPARE TRANSACTION 'regress-one';
CREATE TABLE pxtest3(fff int);
-- Test shared invalidation
BEGIN;
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
DROP TABLE pxtest3;
CREATE TABLE pxtest4 (a int);
INSERT INTO pxtest4 VALUES (1);

View File

@ -40,7 +40,7 @@ CREATE TABLE writetest (a int);
CREATE TEMPORARY TABLE temptest (a int);
BEGIN;
SET TRANSACTION READ ONLY; -- ok
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE, READ ONLY, DEFERRABLE; -- ok
SELECT * FROM writetest; -- ok
SET TRANSACTION READ WRITE; --fail
COMMIT;

View File

@ -795,6 +795,7 @@ LINE
LOCALLOCK
LOCALLOCKOWNER
LOCALLOCKTAG
LOCALPREDICATELOCK
LOCK
LOCKMASK
LOCKMETHODID
@ -931,6 +932,7 @@ OffsetNumber
OffsetVarNodes_context
Oid
OidOptions
OldSerXidControlData
OldToNewMapping
OldToNewMappingData
OldTriggerInfo
@ -1081,6 +1083,10 @@ PQconninfoOption
PQnoticeProcessor
PQnoticeReceiver
PQprintOpt
PREDICATELOCK
PREDICATELOCKTAG
PREDICATELOCKTARGET
PREDICATELOCKTARGETTAG
PROC
PROCESS_INFORMATION
PROCLOCK
@ -1202,6 +1208,9 @@ PreParseColumnRefHook
PredClass
PredIterInfo
PredIterInfoData
PredXactListData
PredXactListElementData
PredicateLockData
PrepareStmt
PreparedParamsData
PreparedStatement
@ -1268,6 +1277,8 @@ RSA
RTEKind
RUHashEntry
RUHashEntryData
RWConflictData
RWConflictPoolHeaderData
RangeFunction
RangeQueryClause
RangeSubselect
@ -1350,6 +1361,10 @@ RunningTransactionsData
SC_HANDLE
SECURITY_ATTRIBUTES
SEG
SERIALIZABLEXACT
SERIALIZABLEXACTTAG
SERIALIZABLEXID
SERIALIZABLEXIDTAG
SERVICE_STATUS
SERVICE_STATUS_HANDLE
SERVICE_TABLE_ENTRY
@ -1595,6 +1610,9 @@ TwoPhaseCallback
TwoPhaseFileHeader
TwoPhaseLockRecord
TwoPhasePgStatRecord
TwoPhasePredicateLockRecord
TwoPhasePredicateRecord
TwoPhasePredicateXactRecord
TwoPhaseRecordOnDisk
TwoPhaseRmgrId
TwoPhaseStateData