<!-- doc/src/sgml/mvcc.sgml -->

<chapter id="mvcc">
 <title>Concurrency Control</title>

 <indexterm>
  <primary>concurrency</primary>
 </indexterm>

 <para>
  This chapter describes the behavior of the
  <productname>PostgreSQL</productname> database system when two or
  more sessions try to access the same data at the same time.  The
  goals in that situation are to allow efficient access for all
  sessions while maintaining strict data integrity.  Every developer
  of database applications should be familiar with the topics covered
  in this chapter.
 </para>

 <sect1 id="mvcc-intro">
  <title>Introduction</title>

  <indexterm>
   <primary>Multiversion Concurrency Control</primary>
  </indexterm>

  <indexterm>
   <primary>MVCC</primary>
  </indexterm>

  <indexterm>
   <primary>Serializable Snapshot Isolation</primary>
  </indexterm>

  <indexterm>
   <primary>SSI</primary>
  </indexterm>

  <para>
   <productname>PostgreSQL</productname> provides a rich set of tools
   for developers to manage concurrent access to data.  Internally,
   data consistency is maintained by using a multiversion
   model (Multiversion Concurrency Control, <acronym>MVCC</acronym>).
   This means that each SQL statement sees
   a snapshot of data (a <firstterm>database version</firstterm>)
   as it was some
   time ago, regardless of the current state of the underlying data.
   This prevents statements from viewing inconsistent data produced
   by concurrent transactions performing updates on the same
   data rows, providing <firstterm>transaction isolation</firstterm>
   for each database session.  <acronym>MVCC</acronym>, by eschewing
   the locking methodologies of traditional database systems,
   minimizes lock contention in order to allow for reasonable
   performance in multiuser environments.
  </para>

  <para>
   The main advantage of using the <acronym>MVCC</acronym> model of
   concurrency control rather than locking is that in
   <acronym>MVCC</acronym> locks acquired for querying (reading) data
   do not conflict with locks acquired for writing data, and so
   reading never blocks writing and writing never blocks reading.
   <productname>PostgreSQL</productname> maintains this guarantee
   even when providing the strictest level of transaction
   isolation through the use of an innovative <firstterm>Serializable
   Snapshot Isolation</firstterm> (<acronym>SSI</acronym>) level.
  </para>

  <para>
   Table- and row-level locking facilities are also available in
   <productname>PostgreSQL</productname> for applications which don't
   generally need full transaction isolation and prefer to explicitly
   manage particular points of conflict.  However, proper
   use of <acronym>MVCC</acronym> will generally provide better
   performance than locks.  In addition, application-defined advisory
   locks provide a mechanism for acquiring locks that are not tied
   to a single transaction.
  </para>
 </sect1>

 <sect1 id="transaction-iso">
  <title>Transaction Isolation</title>

  <indexterm>
   <primary>transaction isolation</primary>
  </indexterm>

  <para>
   The <acronym>SQL</acronym> standard defines four levels of
   transaction isolation.  The most strict is Serializable,
   which is defined by the standard in a paragraph which says that any
   concurrent execution of a set of Serializable transactions is guaranteed
   to produce the same effect as running them one at a time in some order.
   The other three levels are defined in terms of phenomena, resulting from
   interaction between concurrent transactions, which must not occur at
   each level.  The standard notes that due to the definition of
   Serializable, none of these phenomena are possible at that level.  (This
   is hardly surprising -- if the effect of the transactions must be
   consistent with having been run one at a time, how could you see any
   phenomena caused by interactions?)
  </para>

  <para>
   The phenomena which are prohibited at various levels are:

   <variablelist>
    <varlistentry>
     <term>
      dirty read
      <indexterm><primary>dirty read</primary></indexterm>
     </term>
     <listitem>
      <para>
       A transaction reads data written by a concurrent uncommitted transaction.
      </para>
     </listitem>
    </varlistentry>

    <varlistentry>
     <term>
      nonrepeatable read
      <indexterm><primary>nonrepeatable read</primary></indexterm>
     </term>
     <listitem>
      <para>
       A transaction re-reads data it has previously read and finds that data
       has been modified by another transaction (that committed since the
       initial read).
      </para>
     </listitem>
    </varlistentry>

    <varlistentry>
     <term>
      phantom read
      <indexterm><primary>phantom read</primary></indexterm>
     </term>
     <listitem>
      <para>
       A transaction re-executes a query returning a set of rows that satisfy a
       search condition and finds that the set of rows satisfying the condition
       has changed due to another recently-committed transaction.
      </para>
     </listitem>
    </varlistentry>

    <varlistentry>
     <term>
      serialization anomaly
      <indexterm><primary>serialization anomaly</primary></indexterm>
     </term>
     <listitem>
      <para>
       The result of successfully committing a group of transactions
       is inconsistent with all possible orderings of running those
       transactions one at a time.
      </para>
     </listitem>
    </varlistentry>
   </variablelist>
  </para>
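
  <para>
   For example, a nonrepeatable read can be observed with two concurrent
   Read Committed transactions (the <structname>accounts</structname> table
   and its contents here are purely illustrative):
<screen>
-- Session 1:
BEGIN;
SELECT balance FROM accounts WHERE acctnum = 12345;   -- returns 100

-- Session 2, meanwhile:
BEGIN;
UPDATE accounts SET balance = 200 WHERE acctnum = 12345;
COMMIT;

-- Session 1, re-reading the same row:
SELECT balance FROM accounts WHERE acctnum = 12345;   -- now returns 200
COMMIT;
</screen>
   At the Repeatable Read and Serializable levels, the second
   <command>SELECT</command> in session 1 would still return 100.
  </para>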

  <para>
   <indexterm>
    <primary>transaction isolation level</primary>
   </indexterm>
   The SQL standard and PostgreSQL-implemented transaction isolation levels
   are described in <xref linkend="mvcc-isolevel-table"/>.
  </para>

  <table tocentry="1" id="mvcc-isolevel-table">
   <title>Transaction Isolation Levels</title>
   <tgroup cols="5">
    <thead>
     <row>
      <entry>Isolation Level</entry>
      <entry>Dirty Read</entry>
      <entry>Nonrepeatable Read</entry>
      <entry>Phantom Read</entry>
      <entry>Serialization Anomaly</entry>
     </row>
    </thead>
    <tbody>
     <row>
      <entry>Read uncommitted</entry>
      <entry>Allowed, but not in PG</entry>
      <entry>Possible</entry>
      <entry>Possible</entry>
      <entry>Possible</entry>
     </row>
     <row>
      <entry>Read committed</entry>
      <entry>Not possible</entry>
      <entry>Possible</entry>
      <entry>Possible</entry>
      <entry>Possible</entry>
     </row>
     <row>
      <entry>Repeatable read</entry>
      <entry>Not possible</entry>
      <entry>Not possible</entry>
      <entry>Allowed, but not in PG</entry>
      <entry>Possible</entry>
     </row>
     <row>
      <entry>Serializable</entry>
      <entry>Not possible</entry>
      <entry>Not possible</entry>
      <entry>Not possible</entry>
      <entry>Not possible</entry>
     </row>
    </tbody>
   </tgroup>
  </table>

  <para>
   In <productname>PostgreSQL</productname>, you can request any of
   the four standard transaction isolation levels, but internally only
   three distinct isolation levels are implemented, i.e., PostgreSQL's
   Read Uncommitted mode behaves like Read Committed.  This is because
   it is the only sensible way to map the standard isolation levels to
   PostgreSQL's multiversion concurrency control architecture.
  </para>
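
  <para>
   For example, a request for Read Uncommitted is accepted, but dirty
   reads still cannot occur:
<screen>
BEGIN TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SHOW transaction_isolation;   -- reports read uncommitted
-- nevertheless, rows written by other sessions' uncommitted
-- transactions remain invisible, exactly as in Read Committed
COMMIT;
</screen>
  </para>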

  <para>
   The table also shows that PostgreSQL's Repeatable Read implementation
   does not allow phantom reads.  Stricter behavior is permitted by the
   SQL standard: the four isolation levels only define which phenomena
   must not happen, not which phenomena <emphasis>must</emphasis> happen.
   The behavior of the available isolation levels is detailed in the
   following subsections.
  </para>

  <para>
   To set the transaction isolation level of a transaction, use the
   command <xref linkend="sql-set-transaction"/>.
  </para>
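
  <para>
   For example, to run one transaction at the Serializable level:
<screen>
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- ... queries ...
COMMIT;
</screen>
   The level can also be specified directly in <command>BEGIN</command>,
   for example <literal>BEGIN ISOLATION LEVEL REPEATABLE READ</literal>,
   or established as a session default by setting the
   <varname>default_transaction_isolation</varname> configuration parameter.
  </para>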

  <important>
   <para>
    Some <productname>PostgreSQL</productname> data types and functions have
    special rules regarding transactional behavior.  In particular, changes
    made to a sequence (and therefore the counter of a
    column declared using <type>serial</type>) are immediately visible
    to all other transactions and are not rolled back if the transaction
    that made the changes aborts.  See <xref linkend="functions-sequence"/>
    and <xref linkend="datatype-serial"/>.
   </para>
  </important>
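
  <para>
   For example, a sequence value consumed by a rolled-back transaction is
   not restored:
<screen>
CREATE SEQUENCE serial_test;
BEGIN;
SELECT nextval('serial_test');   -- returns 1
ROLLBACK;
SELECT nextval('serial_test');   -- returns 2; the rollback did not undo the increment
</screen>
  </para>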

  <sect2 id="xact-read-committed">
   <title>Read Committed Isolation Level</title>

   <indexterm>
    <primary>transaction isolation level</primary>
    <secondary>read committed</secondary>
   </indexterm>

   <indexterm>
    <primary>read committed</primary>
   </indexterm>

   <para>
    <firstterm>Read Committed</firstterm> is the default isolation
    level in <productname>PostgreSQL</productname>.  When a transaction
    uses this isolation level, a <command>SELECT</command> query
    (without a <literal>FOR UPDATE/SHARE</literal> clause) sees only data
    committed before the query began; it never sees either uncommitted
    data or changes committed during query execution by concurrent
    transactions.  In effect, a <command>SELECT</command> query sees
    a snapshot of the database as of the instant the query begins to
    run.  However, <command>SELECT</command> does see the effects
    of previous updates executed within its own transaction, even
    though they are not yet committed.  Also note that two successive
    <command>SELECT</command> commands can see different data, even
    though they are within a single transaction, if other transactions
    commit changes after the first <command>SELECT</command> starts and
    before the second <command>SELECT</command> starts.
   </para>
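
   <para>
    For example, given an illustrative <structname>website</structname>
    table, two <command>SELECT</command> commands within one Read Committed
    transaction can observe a concurrent commit:
<screen>
-- Session 1:
BEGIN;
SELECT count(*) FROM website;   -- suppose this returns 9

-- Session 2, meanwhile, inserts a row and commits

-- Session 1 again:
SELECT count(*) FROM website;   -- now returns 10
COMMIT;
</screen>
   </para>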
|
|
|
|
|
|
|
|
<para>
<command>UPDATE</command>, <command>DELETE</command>, <command>SELECT
FOR UPDATE</command>, and <command>SELECT FOR SHARE</command> commands
behave the same as <command>SELECT</command>
in terms of searching for target rows: they will only find target rows
that were committed as of the command start time.  However, such a target
row might have already been updated (or deleted or locked) by
another concurrent transaction by the time it is found.  In this case, the
would-be updater will wait for the first updating transaction to commit or
roll back (if it is still in progress).  If the first updater rolls back,
then its effects are negated and the second updater can proceed with
updating the originally found row.  If the first updater commits, the
second updater will ignore the row if the first updater deleted it,
otherwise it will attempt to apply its operation to the updated version of
the row.  The search condition of the command (the <literal>WHERE</literal> clause) is
re-evaluated to see if the updated version of the row still matches the
search condition.  If so, the second updater proceeds with its operation
using the updated version of the row.  In the case of
<command>SELECT FOR UPDATE</command> and <command>SELECT FOR
SHARE</command>, this means it is the updated version of the row that is
locked and returned to the client.
</para>
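<para>
For example, the re-checking behavior can be seen with two concurrent
sessions operating on the same row (an illustrative sketch using the
<literal>accounts</literal> table from the example below):

<screen>
-- session 1
BEGIN;
UPDATE accounts SET balance = balance + 100.00 WHERE acctnum = 12345;

-- session 2 blocks here until session 1 commits or rolls back
UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 12345;
</screen>

If session 1 commits, session 2 re-evaluates its <literal>WHERE</literal>
clause against the updated row version and, since it still matches,
applies its change to that version.
</para>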
<para>
<command>INSERT</command> with an <literal>ON CONFLICT DO UPDATE</literal> clause
behaves similarly.  In Read Committed mode, each row proposed for insertion
will either insert or update.  Unless there are unrelated errors, one of
those two outcomes is guaranteed.  If a conflict originates in another
transaction whose effects are not yet visible to the <command>INSERT
</command>, the <command>UPDATE</command> clause will affect that row,
even though possibly <emphasis>no</emphasis> version of that row is
conventionally visible to the command.
</para>
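<para>
For example (an illustrative sketch; the table and its primary key are
hypothetical):

<screen>
CREATE TABLE counters (name text PRIMARY KEY, hits int);

-- safe to run concurrently in any number of Read Committed sessions;
-- each execution is guaranteed to insert the row or update it
INSERT INTO counters VALUES ('home', 1)
  ON CONFLICT (name) DO UPDATE SET hits = counters.hits + 1;
</screen>
</para>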
<para>
<command>INSERT</command> with an <literal>ON CONFLICT DO
NOTHING</literal> clause might not proceed with insertion for a row due to
the outcome of another transaction whose effects are not visible
to the <command>INSERT</command> snapshot.  Again, this is only
the case in Read Committed mode.
</para>
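<para>
The <literal>DO NOTHING</literal> case can be sketched the same way
(again assuming a hypothetical table <literal>counters</literal> with a
primary key on <literal>name</literal>):

<screen>
-- if another Read Committed session has inserted a conflicting row,
-- even one not visible to this command's snapshot, nothing happens
INSERT INTO counters VALUES ('home', 1) ON CONFLICT (name) DO NOTHING;
</screen>
</para>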
<para>
Because of the above rules, it is possible for an updating command to see
an inconsistent snapshot: it can see the effects of concurrent updating
commands on the same rows it is trying to update, but it
does not see effects of those commands on other rows in the database.
This behavior makes Read Committed mode unsuitable for commands that
involve complex search conditions; however, it is just right for simpler
cases.  For example, consider updating bank balances with transactions
like:

<screen>
BEGIN;
UPDATE accounts SET balance = balance + 100.00 WHERE acctnum = 12345;
UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 7534;
COMMIT;
</screen>

If two such transactions concurrently try to change the balance of account
12345, we clearly want the second transaction to start with the updated
version of the account's row.  Because each command is affecting only a
predetermined row, letting it see the updated version of the row does
not create any troublesome inconsistency.
</para>
<para>
More complex usage can produce undesirable results in Read Committed
mode.  For example, consider a <command>DELETE</command> command
operating on data that is being both added and removed from its
restriction criteria by another command, e.g., assume
<literal>website</literal> is a two-row table with
<literal>website.hits</literal> equaling <literal>9</literal> and
<literal>10</literal>:

<screen>
BEGIN;
UPDATE website SET hits = hits + 1;
-- run from another session:  DELETE FROM website WHERE hits = 10;
COMMIT;
</screen>

The <command>DELETE</command> will have no effect even though
there is a <literal>website.hits = 10</literal> row before and
after the <command>UPDATE</command>.  This occurs because the
pre-update row value <literal>9</literal> is skipped, and when the
<command>UPDATE</command> completes and <command>DELETE</command>
obtains a lock, the new row value is no longer <literal>10</literal> but
<literal>11</literal>, which no longer matches the criteria.
</para>
<para>
Because Read Committed mode starts each command with a new snapshot
that includes all transactions committed up to that instant,
subsequent commands in the same transaction will see the effects
of the committed concurrent transaction in any case.  The point
at issue above is whether or not a <emphasis>single</emphasis> command
sees an absolutely consistent view of the database.
</para>
<para>
The partial transaction isolation provided by Read Committed mode
is adequate for many applications, and this mode is fast and simple
to use; however, it is not sufficient for all cases.  Applications
that do complex queries and updates might require a more rigorously
consistent view of the database than Read Committed mode provides.
</para>
</sect2>
<sect2 id="xact-repeatable-read">
<title>Repeatable Read Isolation Level</title>
<indexterm>
<primary>transaction isolation level</primary>
<secondary>repeatable read</secondary>
</indexterm>
<indexterm>
<primary>repeatable read</primary>
</indexterm>
<para>
The <firstterm>Repeatable Read</firstterm> isolation level only sees
data committed before the transaction began; it never sees either
uncommitted data or changes committed during transaction execution
by concurrent transactions.  (However, the query does see the
effects of previous updates executed within its own transaction,
even though they are not yet committed.)  This is a stronger
guarantee than is required by the <acronym>SQL</acronym> standard
for this isolation level, and prevents all of the phenomena described
in <xref linkend="mvcc-isolevel-table"/> except for serialization
anomalies.  As mentioned above, this is
specifically allowed by the standard, which only describes the
<emphasis>minimum</emphasis> protections each isolation level must
provide.
</para>
<para>
This level is different from Read Committed in that a query in a
repeatable read transaction sees a snapshot as of the start of the
first non-transaction-control statement in the
<emphasis>transaction</emphasis>, not as of the start
of the current statement within the transaction.  Thus, successive
<command>SELECT</command> commands within a <emphasis>single</emphasis>
transaction see the same data, i.e., they do not see changes made by
other transactions that committed after their own transaction started.
</para>
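<para>
For example (an illustrative session using the <literal>website</literal>
table from the previous section):

<screen>
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM website;
-- another session inserts a row into website and commits here
SELECT count(*) FROM website;  -- returns the same count as before
COMMIT;
</screen>
</para>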
<para>
Applications using this level must be prepared to retry transactions
due to serialization failures.
</para>
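<para>
Such failures are reported with <literal>SQLSTATE</literal>
<literal>40001</literal> (<literal>serialization_failure</literal>); the
application should roll back and rerun the whole transaction, for example
(a sketch):

<screen>
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
UPDATE website SET hits = hits + 1;
-- if a concurrent transaction modified and committed one of these rows
-- after our snapshot was taken:
-- ERROR:  could not serialize access due to concurrent update
ROLLBACK;
-- retry the transaction from the beginning
</screen>
</para>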
<para>
<command>UPDATE</command>, <command>DELETE</command>, <command>SELECT
FOR UPDATE</command>, and <command>SELECT FOR SHARE</command> commands
behave the same as <command>SELECT</command>
in terms of searching for target rows: they will only find target rows
that were committed as of the transaction start time.  However, such a
target row might have already been updated (or deleted or locked) by
another concurrent transaction by the time it is found.  In this case, the
page, the locks are promoted to a single page-level lock, and further to a
single relation level lock if necessary. To lock key values with no matching
tuple, a sequential scan always takes a relation-level lock, and an index
scan acquires a page-level lock that covers the search key, whether or not
there are any matching keys at the moment.
A predicate lock doesn't conflict with any regular locks or with another
predicate locks in the normal sense. They're only used by the predicate lock
manager to detect the danger of anomalies. Only serializable transactions
participate in predicate locking, so there should be no extra overhead for
for other transactions.
Predicate locks can't be released at commit, but must be remembered until
all the transactions that overlapped with it have completed. That means that
we need to remember an unbounded amount of predicate locks, so we apply a
lossy but conservative method of tracking locks for committed transactions.
If we run short of shared memory, we overflow to a new "pg_serial" SLRU
pool.
We don't currently allow Serializable transactions in Hot Standby mode.
That would be hard, because even read-only transactions can cause anomalies
that wouldn't otherwise occur.
Serializable isolation mode now means the new fully serializable level.
Repeatable Read gives you the old Snapshot Isolation level that we have
always had.
Kevin Grittner and Dan Ports, reviewed by Jeff Davis, Heikki Linnakangas and
Anssi Kääriäinen
2011-02-07 22:46:51 +01:00
|
|
|
repeatable read transaction will wait for the first updating transaction to commit or
|
2002-05-30 22:45:18 +02:00
|
|
|
roll back (if it is still in progress). If the first updater rolls back,
|
Implement genuine serializable isolation level.
Until now, our Serializable mode has in fact been what's called Snapshot
Isolation, which allows some anomalies that could not occur in any
serialized ordering of the transactions. This patch fixes that using a
method called Serializable Snapshot Isolation, based on research papers by
Michael J. Cahill (see README-SSI for full references). In Serializable
Snapshot Isolation, transactions run like they do in Snapshot Isolation,
but a predicate lock manager observes the reads and writes performed and
aborts transactions if it detects that an anomaly might occur. This method
produces some false positives, ie. it sometimes aborts transactions even
though there is no anomaly.
To track reads we implement predicate locking, see storage/lmgr/predicate.c.
Whenever a tuple is read, a predicate lock is acquired on the tuple. Shared
memory is finite, so when a transaction takes many tuple-level locks on a
page, the locks are promoted to a single page-level lock, and further to a
single relation level lock if necessary. To lock key values with no matching
tuple, a sequential scan always takes a relation-level lock, and an index
scan acquires a page-level lock that covers the search key, whether or not
there are any matching keys at the moment.
A predicate lock doesn't conflict with any regular locks or with another
predicate locks in the normal sense. They're only used by the predicate lock
manager to detect the danger of anomalies. Only serializable transactions
participate in predicate locking, so there should be no extra overhead for
for other transactions.
Predicate locks can't be released at commit, but must be remembered until
all the transactions that overlapped with it have completed. That means that
we need to remember an unbounded amount of predicate locks, so we apply a
lossy but conservative method of tracking locks for committed transactions.
If we run short of shared memory, we overflow to a new "pg_serial" SLRU
pool.
We don't currently allow Serializable transactions in Hot Standby mode.
That would be hard, because even read-only transactions can cause anomalies
that wouldn't otherwise occur.
Serializable isolation mode now means the new fully serializable level.
Repeatable Read gives you the old Snapshot Isolation level that we have
always had.
Kevin Grittner and Dan Ports, reviewed by Jeff Davis, Heikki Linnakangas and
Anssi Kääriäinen
2011-02-07 22:46:51 +01:00
|
|
|
then its effects are negated and the repeatable read transaction can proceed
|
2002-05-30 22:45:18 +02:00
|
|
|
with updating the originally found row. But if the first updater commits
|
2005-04-28 23:47:18 +02:00
|
|
|
(and actually updated or deleted the row, not just locked it)
|
Implement genuine serializable isolation level.
Until now, our Serializable mode has in fact been what's called Snapshot
Isolation, which allows some anomalies that could not occur in any
serialized ordering of the transactions. This patch fixes that using a
method called Serializable Snapshot Isolation, based on research papers by
Michael J. Cahill (see README-SSI for full references). In Serializable
Snapshot Isolation, transactions run like they do in Snapshot Isolation,
but a predicate lock manager observes the reads and writes performed and
aborts transactions if it detects that an anomaly might occur. This method
produces some false positives, ie. it sometimes aborts transactions even
though there is no anomaly.
To track reads we implement predicate locking, see storage/lmgr/predicate.c.
Whenever a tuple is read, a predicate lock is acquired on the tuple. Shared
memory is finite, so when a transaction takes many tuple-level locks on a
page, the locks are promoted to a single page-level lock, and further to a
single relation level lock if necessary. To lock key values with no matching
tuple, a sequential scan always takes a relation-level lock, and an index
scan acquires a page-level lock that covers the search key, whether or not
there are any matching keys at the moment.
A predicate lock doesn't conflict with any regular locks or with another
predicate locks in the normal sense. They're only used by the predicate lock
manager to detect the danger of anomalies. Only serializable transactions
participate in predicate locking, so there should be no extra overhead for
for other transactions.
Predicate locks can't be released at commit, but must be remembered until
all the transactions that overlapped with it have completed. That means that
we need to remember an unbounded amount of predicate locks, so we apply a
lossy but conservative method of tracking locks for committed transactions.
If we run short of shared memory, we overflow to a new "pg_serial" SLRU
pool.
We don't currently allow Serializable transactions in Hot Standby mode.
That would be hard, because even read-only transactions can cause anomalies
that wouldn't otherwise occur.
Serializable isolation mode now means the new fully serializable level.
Repeatable Read gives you the old Snapshot Isolation level that we have
always had.
Kevin Grittner and Dan Ports, reviewed by Jeff Davis, Heikki Linnakangas and
Anssi Kääriäinen
2011-02-07 22:46:51 +01:00
|
|
|
then the repeatable read transaction will be rolled back with the message
|
1999-05-26 19:27:39 +02:00
|
|
|
|
2001-11-28 21:49:10 +01:00
|
|
|
<screen>
|
2003-09-13 00:17:24 +02:00
|
|
|
ERROR: could not serialize access due to concurrent update
|
2001-11-28 21:49:10 +01:00
|
|
|
</screen>
|
1999-05-26 19:27:39 +02:00
|
|
|
|
Implement genuine serializable isolation level.
Until now, our Serializable mode has in fact been what's called Snapshot
Isolation, which allows some anomalies that could not occur in any
serialized ordering of the transactions. This patch fixes that using a
method called Serializable Snapshot Isolation, based on research papers by
Michael J. Cahill (see README-SSI for full references). In Serializable
Snapshot Isolation, transactions run like they do in Snapshot Isolation,
but a predicate lock manager observes the reads and writes performed and
aborts transactions if it detects that an anomaly might occur. This method
produces some false positives, ie. it sometimes aborts transactions even
though there is no anomaly.
To track reads we implement predicate locking, see storage/lmgr/predicate.c.
Whenever a tuple is read, a predicate lock is acquired on the tuple. Shared
memory is finite, so when a transaction takes many tuple-level locks on a
page, the locks are promoted to a single page-level lock, and further to a
single relation level lock if necessary. To lock key values with no matching
tuple, a sequential scan always takes a relation-level lock, and an index
scan acquires a page-level lock that covers the search key, whether or not
there are any matching keys at the moment.
A predicate lock doesn't conflict with any regular locks or with another
predicate locks in the normal sense. They're only used by the predicate lock
manager to detect the danger of anomalies. Only serializable transactions
participate in predicate locking, so there should be no extra overhead for
for other transactions.
Predicate locks can't be released at commit, but must be remembered until
all the transactions that overlapped with it have completed. That means that
we need to remember an unbounded amount of predicate locks, so we apply a
lossy but conservative method of tracking locks for committed transactions.
If we run short of shared memory, we overflow to a new "pg_serial" SLRU
pool.
We don't currently allow Serializable transactions in Hot Standby mode.
That would be hard, because even read-only transactions can cause anomalies
that wouldn't otherwise occur.
Serializable isolation mode now means the new fully serializable level.
Repeatable Read gives you the old Snapshot Isolation level that we have
always had.
Kevin Grittner and Dan Ports, reviewed by Jeff Davis, Heikki Linnakangas and
Anssi Kääriäinen
2011-02-07 22:46:51 +01:00
|
|
|
because a repeatable read transaction cannot modify or lock rows changed by
|
|
|
|
other transactions after the repeatable read transaction began.
|
1999-05-26 19:27:39 +02:00
|
|
|
</para>
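
<para>
To illustrate, consider a hypothetical <structname>accounts</structname>
table, invented only for this sketch.  Two concurrent repeatable read
transactions updating the same row might interact as follows:
<screen>
-- Session 1                                 -- Session 2
BEGIN ISOLATION LEVEL REPEATABLE READ;
                                             BEGIN ISOLATION LEVEL REPEATABLE READ;
UPDATE accounts SET balance = balance - 100
  WHERE id = 1;
                                             UPDATE accounts SET balance = balance + 10
                                               WHERE id = 1;
                                             -- blocks, waiting for session 1
COMMIT;
                                             -- ERROR:  could not serialize access due to concurrent update
</screen>
Had session 1 rolled back instead of committing, session 2's
<command>UPDATE</command> would have proceeded normally.
</para>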

<para>
When an application receives this error message, it should abort
the current transaction and retry the whole transaction from
the beginning.  The second time through, the transaction will see the
previously-committed change as part of its initial view of the database,
so there is no logical conflict in using the new version of the row
as the starting point for the new transaction's update.
</para>
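
<para>
A client-side retry loop might be sketched as follows.  This is
illustrative application code, not part of
<productname>PostgreSQL</productname> itself; the
<application>psycopg2</application> Python driver and the
<structname>accounts</structname> table are assumed only for the example:
<programlisting>
import psycopg2
import psycopg2.errors

def transfer(conn, src, dst, amount, max_attempts=3):
    # Retry the whole transaction on serialization failure (SQLSTATE 40001).
    for attempt in range(max_attempts):
        try:
            with conn.cursor() as cur:
                cur.execute("BEGIN ISOLATION LEVEL REPEATABLE READ")
                cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
                            (amount, src))
                cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s",
                            (amount, dst))
            conn.commit()
            return
        except psycopg2.errors.SerializationFailure:
            conn.rollback()   # abort and retry from the beginning
    raise RuntimeError("transaction failed after %d attempts" % max_attempts)
</programlisting>
</para>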

<para>
Note that only updating transactions might need to be retried; read-only
transactions will never have serialization conflicts.
</para>

<para>
The Repeatable Read mode provides a rigorous guarantee that each
transaction sees a completely stable view of the database.  However,
this view will not necessarily always be consistent with some serial
(one at a time) execution of concurrent transactions of the same level.
For example, even a read-only transaction at this level might see a
control record updated to show that a batch has been completed but
<emphasis>not</emphasis> see one of the detail records which is logically
part of the batch, because it read an earlier revision of the control
record.  Attempts to enforce business rules by transactions running at
this isolation level are not likely to work correctly without careful use
of explicit locks to block conflicting transactions.
</para>
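
<para>
That scenario can be sketched as follows; the
<structname>control</structname> and <structname>receipts</structname>
tables and their columns are invented for this illustration:
<screen>
-- T1 enters a receipt into the current batch (but does not commit yet)
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT batch_nr FROM control;                  -- returns 41
INSERT INTO receipts VALUES (41, 100.00);

-- T2 closes the current batch and commits
BEGIN ISOLATION LEVEL REPEATABLE READ;
UPDATE control SET batch_nr = batch_nr + 1;    -- batch 41 is now closed
COMMIT;

-- T3, a read-only report started after T2's commit
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT batch_nr FROM control;                  -- sees 42: batch 41 is closed
SELECT sum(amount) FROM receipts
  WHERE batch_nr = 41;                         -- misses T1's receipt
COMMIT;

-- T1 commits last; its receipt logically belongs to batch 41,
-- but T3's report has already run without it
COMMIT;
</screen>
No serial ordering of T1, T2, and T3 produces T3's result, yet all three
transactions succeed at the Repeatable Read level.
</para>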

<note>
<para>
Prior to <productname>PostgreSQL</productname> version 9.1, a request
for the Serializable transaction isolation level provided exactly the
same behavior described here.  To retain the legacy Serializable
behavior, Repeatable Read should now be requested.
</para>
</note>
</sect2>

<sect2 id="xact-serializable">
<title>Serializable Isolation Level</title>

<indexterm>
<primary>transaction isolation level</primary>
<secondary>serializable</secondary>
</indexterm>

<indexterm>
<primary>serializable</primary>
</indexterm>

<indexterm>
<primary>predicate locking</primary>
</indexterm>

<indexterm>
<primary>serialization anomaly</primary>
</indexterm>
<para>
The <firstterm>Serializable</firstterm> isolation level provides
the strictest transaction isolation.  This level emulates serial
transaction execution for all committed transactions,
as if transactions had been executed one after another, serially,
rather than concurrently.  However, like the Repeatable Read level,
applications using this level must
be prepared to retry transactions due to serialization failures.
In fact, this isolation level works exactly the same as Repeatable
Read except that it monitors for conditions which could make
execution of a concurrent set of serializable transactions behave
in a manner inconsistent with all possible serial (one at a time)
executions of those transactions.  This monitoring does not
introduce any blocking beyond that present in Repeatable Read, but
there is some overhead to the monitoring, and detection of the
conditions which could cause a
<firstterm>serialization anomaly</firstterm> will trigger a
<firstterm>serialization failure</firstterm>.
</para>
|
|
|
|
|
|
|
|
<para>
|
|
|
|
As an example,
|
2017-10-09 03:44:17 +02:00
|
|
|
consider a table <structname>mytab</structname>, initially containing:
|
2004-08-15 00:18:23 +02:00
|
|
|
<screen>
|
2011-12-17 22:41:16 +01:00
|
|
|
class | value
|
2004-08-15 00:18:23 +02:00
|
|
|
-------+-------
|
|
|
|
1 | 10
|
|
|
|
1 | 20
|
|
|
|
2 | 100
|
|
|
|
2 | 200
|
|
|
|
</screen>
|
2009-04-27 18:27:36 +02:00
|
|
|
Suppose that serializable transaction A computes:
|
2004-08-15 00:18:23 +02:00
|
|
|
<screen>
|
|
|
|
SELECT SUM(value) FROM mytab WHERE class = 1;
|
|
|
|
</screen>
|
2017-10-09 03:44:17 +02:00
|
|
|
and then inserts the result (30) as the <structfield>value</structfield> in a
|
|
|
|
new row with <structfield>class</structfield><literal> = 2</literal>. Concurrently, serializable
|
2009-04-27 18:27:36 +02:00
|
|
|
transaction B computes:
|
2004-08-15 00:18:23 +02:00
|
|
|
<screen>
|
|
|
|
SELECT SUM(value) FROM mytab WHERE class = 2;
|
|
|
|
</screen>
|
|
|
|
and obtains the result 300, which it inserts in a new row with
|
2017-10-09 03:44:17 +02:00
|
|
|
<structfield>class</structfield><literal> = 1</literal>. Then both transactions try to commit.
|
Implement genuine serializable isolation level.
Until now, our Serializable mode has in fact been what's called Snapshot
Isolation, which allows some anomalies that could not occur in any
    If either transaction were running at the Repeatable Read isolation level,
    both would be allowed to commit; but since there is no serial order of execution
    consistent with the result, using Serializable transactions will allow one
    transaction to commit and will roll the other back with this message:
<screen>
ERROR:  could not serialize access due to read/write dependencies among transactions
</screen>
    This is because if A had executed before B, B would have computed the
    sum 330, not 300, and similarly the other order would have resulted in
    a different sum computed by A.
   </para>
   <para>
    When relying on Serializable transactions to prevent anomalies, it is
    important that any data read from a permanent user table not be
    considered valid until the transaction which read it has successfully
    committed.  This is true even for read-only transactions, except that
    data read within a <firstterm>deferrable</firstterm> read-only
    transaction is known to be valid as soon as it is read, because such a
    transaction waits until it can acquire a snapshot guaranteed to be free
    from such problems before starting to read any data.  In all other cases
    applications must not depend on results read during a transaction that
    later aborted; instead, they should retry the transaction until it
    succeeds.
   </para>
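   <para>
    As a sketch of this retry pattern (the table name and update shown here
    are purely illustrative), an application might repeat the following
    until the <command>COMMIT</command> succeeds:
<programlisting>
BEGIN ISOLATION LEVEL SERIALIZABLE;
UPDATE mytab SET value = value + 1 WHERE class = 1;
COMMIT;
-- if the server reports SQLSTATE '40001' (serialization_failure),
-- issue ROLLBACK and start over from BEGIN
</programlisting>
   </para>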
   <para>
    To guarantee true serializability <productname>PostgreSQL</productname>
    uses <firstterm>predicate locking</firstterm>, which means that it keeps locks
    which allow it to determine when a write would have had an impact on
    the result of a previous read from a concurrent transaction, had it run
    first.  In <productname>PostgreSQL</productname> these locks do not
    cause any blocking and therefore can <emphasis>not</emphasis> play any part in
    causing a deadlock.  They are used to identify and flag dependencies
    among concurrent Serializable transactions which in certain combinations
    can lead to serialization anomalies.  In contrast, a Read Committed or
    Repeatable Read transaction which wants to ensure data consistency may
    need to take out a lock on an entire table, which could block other
    users attempting to use that table, or it may use <literal>SELECT FOR
    UPDATE</literal> or <literal>SELECT FOR SHARE</literal>, which can not only
    block other transactions but also cause disk access.
   </para>
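   <para>
    The difference can be sketched with a hypothetical table: a Repeatable
    Read transaction that wants to guard a read against concurrent writes
    might issue
<programlisting>
SELECT * FROM mytab WHERE class = 1 FOR SHARE;
</programlisting>
    blocking any concurrent writer of those rows, while a Serializable
    transaction can issue a plain
<programlisting>
SELECT * FROM mytab WHERE class = 1;
</programlisting>
    and rely on non-blocking predicate locks to detect any dangerous
    interleaving with concurrent writers.
   </para>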
   <para>
    Predicate locks in <productname>PostgreSQL</productname>, like in most
    other database systems, are based on data actually accessed by a
    transaction.  These will show up in the
    <link linkend="view-pg-locks"><structname>pg_locks</structname></link>
    system view with a <literal>mode</literal> of <literal>SIReadLock</literal>.  The
    particular locks acquired during execution of a query will depend on
    the plan used by the query, and multiple finer-grained locks (e.g.,
    tuple locks) may be combined into fewer coarser-grained locks (e.g.,
    page locks) during the course of the transaction to prevent exhaustion
    of the memory used to track the locks.  A <literal>READ ONLY</literal>
    transaction may be able to release its SIRead locks before completion,
    if it detects that no conflicts can still occur which could lead to a
    serialization anomaly.  In fact, <literal>READ ONLY</literal>
    transactions will often be able to establish that fact at startup and
    avoid taking any predicate locks.  If you explicitly request a
    <literal>SERIALIZABLE READ ONLY DEFERRABLE</literal> transaction, it
    will block until it can establish this fact.  (This is the
    <emphasis>only</emphasis> case where Serializable transactions block but
    Repeatable Read transactions don't.)  On the other hand, SIRead locks
    often need to be kept past transaction commit, until overlapping
    read-write transactions complete.
   </para>
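   <para>
    The SIRead locks currently held can be examined through
    <structname>pg_locks</structname>, for example:
<programlisting>
SELECT locktype, relation::regclass, page, tuple
  FROM pg_locks
 WHERE mode = 'SIReadLock';
</programlisting>
    A deferrable read-only transaction is requested explicitly with:
<programlisting>
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE;
</programlisting>
   </para>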
|
|
|
|
|
|
|
|
<para>
|
|
|
|
Consistent use of Serializable transactions can simplify development.
|
2016-04-07 18:12:35 +02:00
|
|
|
The guarantee that any set of successfully committed concurrent
|
|
|
|
Serializable transactions will have the same effect as if they were run
|
|
|
|
one at a time means that if you can demonstrate that a single transaction,
|
|
|
|
as written, will do the right thing when run by itself, you can have
|
|
|
|
confidence that it will do the right thing in any mix of Serializable
|
|
|
|
transactions, even without any information about what those other
|
|
|
|
transactions might do, or it will not successfully commit. It is
|
    important that an environment which uses this technique have a
    generalized way of handling serialization failures (which always return
    with a SQLSTATE value of '40001'), because it will be very hard to
    predict exactly which transactions might contribute to the read/write
    dependencies and need to be rolled back to prevent serialization
    anomalies.  The monitoring of read/write dependencies has a cost, as does
    the restart of transactions which are terminated with a serialization
    failure, but balanced against the cost and blocking involved in use of
    explicit locks and <literal>SELECT FOR UPDATE</literal> or <literal>SELECT FOR
    SHARE</literal>, Serializable transactions are the best performance choice
    for some environments.
   </para>
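Because any Serializable transaction can be rolled back with SQLSTATE '40001' at any time, the "generalized way of handling serialization failures" above is usually a retry loop around the whole transaction. A minimal sketch in Python — the <literal>SerializationFailure</literal> class and the demo transaction body are hypothetical stand-ins for a real driver's exception and real <command>BEGIN</command>/<command>COMMIT</command> work:

```python
class SerializationFailure(Exception):
    """Hypothetical stand-in for a driver error carrying SQLSTATE '40001'."""
    sqlstate = "40001"

def run_with_retries(txn, max_retries=5):
    """Run txn(); on a serialization failure, retry it from scratch."""
    for _ in range(max_retries):
        try:
            return txn()  # in real code: BEGIN ... work ... COMMIT
        except SerializationFailure:
            continue      # in real code: ROLLBACK, then start over
    raise RuntimeError("gave up after %d serialization failures" % max_retries)

# Demo: a transaction body that fails twice, then succeeds.
attempts = []
def flaky_txn():
    attempts.append(1)
    if len(attempts) < 3:
        raise SerializationFailure()
    return "committed"

print(run_with_retries(flaky_txn))  # prints: committed
```

The essential point is that the retried unit is the entire transaction, not an individual statement, since the failed transaction's work has already been rolled back.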

   <para>
    While <productname>PostgreSQL</productname>'s Serializable transaction isolation
    level only allows concurrent transactions to commit if it can prove there
    is a serial order of execution that would produce the same effect, it
    doesn't always prevent errors from being raised that would not occur in
    true serial execution.  In particular, it is possible to see unique
    constraint violations caused by conflicts with overlapping Serializable
    transactions even after explicitly checking that the key isn't present
    before attempting to insert it.  This can be avoided by making sure
    that <emphasis>all</emphasis> Serializable transactions that insert potentially
    conflicting keys explicitly check if they can do so first.  For example,
    imagine an application that asks the user for a new key and then checks
    that it doesn't exist already by trying to select it first, or generates
    a new key by selecting the maximum existing key and adding one.  If some
    Serializable transactions insert new keys directly without following this
    protocol, unique constraint violations might be reported even in cases
    where they could not occur in a serial execution of the concurrent
    transactions.
   </para>

   <para>
    For optimal performance when relying on Serializable transactions for
    concurrency control, these issues should be considered:

    <itemizedlist>
     <listitem>
      <para>
       Declare transactions as <literal>READ ONLY</literal> when possible.
      </para>
     </listitem>
     <listitem>
      <para>
       Control the number of active connections, using a connection pool if
       needed.  This is always an important performance consideration, but
       it can be particularly important in a busy system using Serializable
       transactions.
      </para>
     </listitem>
     <listitem>
      <para>
       Don't put more into a single transaction than needed for integrity
       purposes.
      </para>
     </listitem>
     <listitem>
      <para>
       Don't leave connections dangling <quote>idle in transaction</quote>
       longer than necessary.  The configuration parameter
       <xref linkend="guc-idle-in-transaction-session-timeout"/> may be used to
       automatically disconnect lingering sessions.
      </para>
     </listitem>
     <listitem>
      <para>
       Eliminate explicit locks, <literal>SELECT FOR UPDATE</literal>, and
       <literal>SELECT FOR SHARE</literal> where no longer needed due to the
       protections automatically provided by Serializable transactions.
      </para>
     </listitem>
     <listitem>
      <para>
       When the system is forced to combine multiple page-level predicate
       locks into a single relation-level predicate lock because the predicate
       lock table is short of memory, an increase in the rate of serialization
       failures may occur.  You can avoid this by increasing
       <xref linkend="guc-max-pred-locks-per-transaction"/>,
       <xref linkend="guc-max-pred-locks-per-relation"/>, and/or
       <xref linkend="guc-max-pred-locks-per-page"/>.
      </para>
     </listitem>
     <listitem>
      <para>
       A sequential scan will always necessitate a relation-level predicate
       lock.  This can result in an increased rate of serialization failures.
       It may be helpful to encourage the use of index scans by reducing
       <xref linkend="guc-random-page-cost"/> and/or increasing
       <xref linkend="guc-cpu-tuple-cost"/>.  Be sure to weigh any decrease
       in transaction rollbacks and restarts against any overall change in
       query execution time.
      </para>
     </listitem>
    </itemizedlist>
   </para>
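The predicate-lock sizing parameters mentioned above are ordinary <filename>postgresql.conf</filename> settings. A hypothetical starting point — the non-default values shown are purely illustrative and should be tuned against the serialization failure rate actually observed:

```
# Allow more predicate locks before they are promoted to coarser
# granularity (illustrative values; the defaults are 64 and 2).
max_pred_locks_per_transaction = 128
max_pred_locks_per_page = 4

# Optionally nudge the planner toward index scans, which take
# finer-grained predicate locks than a sequential scan's
# relation-level lock.
random_page_cost = 1.1
```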
  </sect2>
 </sect1>

 <sect1 id="explicit-locking">
  <title>Explicit Locking</title>

  <indexterm>
   <primary>lock</primary>
  </indexterm>

  <para>
   <productname>PostgreSQL</productname> provides various lock modes
   to control concurrent access to data in tables.  These modes can
   be used for application-controlled locking in situations where
   <acronym>MVCC</acronym> does not give the desired behavior.  Also,
   most <productname>PostgreSQL</productname> commands automatically
   acquire locks of appropriate modes to ensure that referenced
   tables are not dropped or modified in incompatible ways while the
   command executes.  (For example, <command>TRUNCATE</command> cannot safely be
   executed concurrently with other operations on the same table, so it
   obtains an exclusive lock on the table to enforce that.)
  </para>

  <para>
   To examine a list of the currently outstanding locks in a database
   server, use the
   <link linkend="view-pg-locks"><structname>pg_locks</structname></link>
   system view.  For more information on monitoring the status of the lock
   manager subsystem, refer to <xref linkend="monitoring"/>.
  </para>

  <sect2 id="locking-tables">
   <title>Table-level Locks</title>

   <indexterm zone="locking-tables">
    <primary>LOCK</primary>
   </indexterm>

   <para>
    The list below shows the available lock modes and the contexts in
    which they are used automatically by
    <productname>PostgreSQL</productname>.  You can also acquire any
    of these locks explicitly with the command <xref
    linkend="sql-lock"/>.
    Remember that all of these lock modes are table-level locks,
    even if the name contains the word
    <quote>row</quote>; the names of the lock modes are historical.
    To some extent the names reflect the typical usage of each lock
    mode — but the semantics are all the same.  The only real difference
    between one lock mode and another is the set of lock modes with
    which each conflicts (see <xref linkend="table-lock-compatibility"/>).
    Two transactions cannot hold locks of conflicting
    modes on the same table at the same time.  (However, a transaction
    never conflicts with itself.  For example, it might acquire
    <literal>ACCESS EXCLUSIVE</literal> lock and later acquire
    <literal>ACCESS SHARE</literal> lock on the same table.)  Non-conflicting
    lock modes can be held concurrently by many transactions.  Notice in
    particular that some lock modes are self-conflicting (for example,
    an <literal>ACCESS EXCLUSIVE</literal> lock cannot be held by more than one
    transaction at a time) while others are not self-conflicting (for example,
    an <literal>ACCESS SHARE</literal> lock can be held by multiple transactions).
   </para>
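Since the only real difference between lock modes is their conflict sets, the rules can be encoded as a small symmetric table. A sketch in Python covering only the modes whose conflicts are listed in this section (an illustration, not part of any PostgreSQL API; the full matrix has eight modes):

```python
# Table-level lock conflicts for the modes described in this section.
# Each mode maps to the set of modes it conflicts with; the relation
# is symmetric, so conflicts() checks both directions.
CONFLICTS = {
    "ACCESS SHARE": {"ACCESS EXCLUSIVE"},
    "ROW SHARE": {"EXCLUSIVE", "ACCESS EXCLUSIVE"},
    "ROW EXCLUSIVE": {"SHARE", "SHARE ROW EXCLUSIVE",
                      "EXCLUSIVE", "ACCESS EXCLUSIVE"},
    "SHARE UPDATE EXCLUSIVE": {"SHARE UPDATE EXCLUSIVE", "SHARE",
                               "SHARE ROW EXCLUSIVE", "EXCLUSIVE",
                               "ACCESS EXCLUSIVE"},
}

def conflicts(a, b):
    """True if two transactions cannot hold modes a and b concurrently."""
    return b in CONFLICTS.get(a, set()) or a in CONFLICTS.get(b, set())

# ACCESS SHARE is not self-conflicting; SHARE UPDATE EXCLUSIVE is.
print(conflicts("ACCESS SHARE", "ACCESS SHARE"))                      # False
print(conflicts("SHARE UPDATE EXCLUSIVE", "SHARE UPDATE EXCLUSIVE"))  # True
```

This makes the self-conflict distinction in the preceding paragraph concrete: a mode conflicts with itself exactly when it appears in its own conflict set.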
|
2002-08-17 15:04:19 +02:00
|
|
|
|
1999-05-26 19:27:39 +02:00
|
|
|
<variablelist>
|
2011-01-29 19:00:18 +01:00
|
|
|
<title>Table-level Lock Modes</title>
|
1999-05-26 19:27:39 +02:00
|
|
|
<varlistentry>
|
|
|
|
<term>
|
2006-09-18 00:50:31 +02:00
|
|
|
<literal>ACCESS SHARE</literal>
|
1999-05-26 19:27:39 +02:00
|
|
|
</term>
|
|
|
|
<listitem>
|
2006-09-18 00:50:31 +02:00
|
|
|
<para>
|
|
|
|
Conflicts with the <literal>ACCESS EXCLUSIVE</literal> lock
|
|
|
|
mode only.
|
|
|
|
</para>
|
|
|
|
|
|
|
|
<para>
|
|
|
|
The <command>SELECT</command> command acquires a lock of this mode on
|
2017-10-09 03:44:17 +02:00
|
|
|
referenced tables. In general, any query that only <emphasis>reads</emphasis> a table
|
2006-09-18 00:50:31 +02:00
|
|
|
and does not modify it will acquire this lock mode.
|
|
|
|
</para>
|
1999-05-26 19:27:39 +02:00
|
|
|
</listitem>
|
|
|
|
</varlistentry>

<varlistentry>
<term>
<literal>ROW SHARE</literal>
</term>
<listitem>
<para>
Conflicts with the <literal>EXCLUSIVE</literal> and
<literal>ACCESS EXCLUSIVE</literal> lock modes.
</para>

<para>
The <command>SELECT FOR UPDATE</command> and
<command>SELECT FOR SHARE</command> commands acquire a
lock of this mode on the target table(s) (in addition to
<literal>ACCESS SHARE</literal> locks on any other tables
that are referenced but not selected
<option>FOR UPDATE/FOR SHARE</option>).
</para>
</listitem>
</varlistentry>

<varlistentry>
<term>
<literal>ROW EXCLUSIVE</literal>
</term>
<listitem>
<para>
Conflicts with the <literal>SHARE</literal>, <literal>SHARE ROW
EXCLUSIVE</literal>, <literal>EXCLUSIVE</literal>, and
<literal>ACCESS EXCLUSIVE</literal> lock modes.
</para>

<para>
The commands <command>UPDATE</command>,
<command>DELETE</command>, and <command>INSERT</command>
acquire this lock mode on the target table (in addition to
<literal>ACCESS SHARE</literal> locks on any other referenced
tables). In general, this lock mode will be acquired by any
command that <emphasis>modifies data</emphasis> in a table.
</para>
</listitem>
</varlistentry>

<varlistentry>
<term>
<literal>SHARE UPDATE EXCLUSIVE</literal>
</term>
<listitem>
<para>
Conflicts with the <literal>SHARE UPDATE EXCLUSIVE</literal>,
<literal>SHARE</literal>, <literal>SHARE ROW
EXCLUSIVE</literal>, <literal>EXCLUSIVE</literal>, and
<literal>ACCESS EXCLUSIVE</literal> lock modes.
This mode protects a table against
concurrent schema changes and <command>VACUUM</command> runs.
</para>

<para>
Acquired by <command>VACUUM</command> (without <option>FULL</option>),
<command>ANALYZE</command>, <command>CREATE INDEX CONCURRENTLY</command>,
<command>CREATE STATISTICS</command>, and
<command>ALTER TABLE VALIDATE</command> and other
<command>ALTER TABLE</command> variants (for full details see
<xref linkend="sql-altertable"/>).
</para>
</listitem>
</varlistentry>

<varlistentry>
<term>
<literal>SHARE</literal>
</term>
<listitem>
<para>
Conflicts with the <literal>ROW EXCLUSIVE</literal>,
<literal>SHARE UPDATE EXCLUSIVE</literal>, <literal>SHARE ROW
EXCLUSIVE</literal>, <literal>EXCLUSIVE</literal>, and
<literal>ACCESS EXCLUSIVE</literal> lock modes.
This mode protects a table against concurrent data changes.
</para>

<para>
Acquired by <command>CREATE INDEX</command>
(without <option>CONCURRENTLY</option>).
</para>
</listitem>
</varlistentry>

<varlistentry>
<term>
<literal>SHARE ROW EXCLUSIVE</literal>
</term>
<listitem>
<para>
Conflicts with the <literal>ROW EXCLUSIVE</literal>,
<literal>SHARE UPDATE EXCLUSIVE</literal>,
<literal>SHARE</literal>, <literal>SHARE ROW
EXCLUSIVE</literal>, <literal>EXCLUSIVE</literal>, and
<literal>ACCESS EXCLUSIVE</literal> lock modes.
This mode protects a table against concurrent data changes, and
is self-exclusive so that only one session can hold it at a time.
</para>

<para>
Acquired by <command>CREATE COLLATION</command>,
<command>CREATE TRIGGER</command>, and many forms of
<command>ALTER TABLE</command> (see <xref linkend="sql-altertable"/>).
</para>
</listitem>
</varlistentry>

<varlistentry>
<term>
<literal>EXCLUSIVE</literal>
</term>
<listitem>
<para>
Conflicts with the <literal>ROW SHARE</literal>, <literal>ROW
EXCLUSIVE</literal>, <literal>SHARE UPDATE
EXCLUSIVE</literal>, <literal>SHARE</literal>, <literal>SHARE
ROW EXCLUSIVE</literal>, <literal>EXCLUSIVE</literal>, and
<literal>ACCESS EXCLUSIVE</literal> lock modes.
This mode allows only concurrent <literal>ACCESS SHARE</literal> locks,
i.e., only reads from the table can proceed in parallel with a
transaction holding this lock mode.
</para>

<para>
Acquired by <command>REFRESH MATERIALIZED VIEW CONCURRENTLY</command>.
</para>
</listitem>
</varlistentry>

<varlistentry>
<term>
<literal>ACCESS EXCLUSIVE</literal>
</term>
<listitem>
<para>
Conflicts with locks of all modes (<literal>ACCESS
SHARE</literal>, <literal>ROW SHARE</literal>, <literal>ROW
EXCLUSIVE</literal>, <literal>SHARE UPDATE
EXCLUSIVE</literal>, <literal>SHARE</literal>, <literal>SHARE
ROW EXCLUSIVE</literal>, <literal>EXCLUSIVE</literal>, and
<literal>ACCESS EXCLUSIVE</literal>).
This mode guarantees that the
holder is the only transaction accessing the table in any way.
</para>

<para>
Acquired by the <command>DROP TABLE</command>,
<command>TRUNCATE</command>, <command>REINDEX</command>,
<command>CLUSTER</command>, <command>VACUUM FULL</command>,
and <command>REFRESH MATERIALIZED VIEW</command> (without
<option>CONCURRENTLY</option>)
commands. Many forms of <command>ALTER TABLE</command> also acquire
a lock at this level. This is also the default lock mode for
<command>LOCK TABLE</command> statements that do not specify
a mode explicitly.
</para>
</listitem>
</varlistentry>
</variablelist>

<tip>
<para>
Only an <literal>ACCESS EXCLUSIVE</literal> lock blocks a
<command>SELECT</command> (without <option>FOR UPDATE/SHARE</option>)
statement.
</para>
</tip>
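
<para>
A table-level lock can also be taken explicitly with
<command>LOCK TABLE</command>. As a brief sketch (using a hypothetical
<literal>accounts</literal> table), taking <literal>SHARE</literal> mode
blocks concurrent data changes while still allowing plain reads:
<screen>
BEGIN;
LOCK TABLE accounts IN SHARE MODE;
-- concurrent INSERT/UPDATE/DELETE on accounts now block until COMMIT,
-- but plain SELECTs proceed normally
SELECT sum(balance) FROM accounts;
COMMIT;
</screen>
</para>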

<para>
Once acquired, a lock is normally held until the end of the transaction. But if a
lock is acquired after establishing a savepoint, the lock is released
immediately if the savepoint is rolled back to. This is consistent with
the principle that <command>ROLLBACK</command> cancels all effects of the
commands since the savepoint. The same holds for locks acquired within a
<application>PL/pgSQL</application> exception block: an error escape from the block
releases locks acquired within it.
</para>
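
<para>
For example (a minimal sketch, again using a hypothetical
<literal>accounts</literal> table), a lock taken after a savepoint
disappears when that savepoint is rolled back to:
<screen>
BEGIN;
SAVEPOINT s1;
LOCK TABLE accounts IN ACCESS EXCLUSIVE MODE;
ROLLBACK TO SAVEPOINT s1;  -- the ACCESS EXCLUSIVE lock is released here
COMMIT;
</screen>
</para>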

<table tocentry="1" id="table-lock-compatibility">
<title>Conflicting Lock Modes</title>
<tgroup cols="9">
<colspec colnum="2" colname="lockst"/>
<colspec colnum="9" colname="lockend"/>
<spanspec namest="lockst" nameend="lockend" spanname="lockreq"/>
<thead>
<row>
<entry morerows="1">Requested Lock Mode</entry>
<entry spanname="lockreq">Current Lock Mode</entry>
</row>
<row>
<entry>ACCESS SHARE</entry>
<entry>ROW SHARE</entry>
<entry>ROW EXCLUSIVE</entry>
<entry>SHARE UPDATE EXCLUSIVE</entry>
<entry>SHARE</entry>
<entry>SHARE ROW EXCLUSIVE</entry>
<entry>EXCLUSIVE</entry>
<entry>ACCESS EXCLUSIVE</entry>
</row>
</thead>
<tbody>
<row>
<entry>ACCESS SHARE</entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center">X</entry>
</row>
<row>
<entry>ROW SHARE</entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
</row>
<row>
<entry>ROW EXCLUSIVE</entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
</row>
<row>
<entry>SHARE UPDATE EXCLUSIVE</entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
</row>
<row>
<entry>SHARE</entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
<entry align="center"></entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
</row>
<row>
<entry>SHARE ROW EXCLUSIVE</entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
</row>
<row>
<entry>EXCLUSIVE</entry>
<entry align="center"></entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
</row>
<row>
<entry>ACCESS EXCLUSIVE</entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
</row>
</tbody>
</tgroup>
</table>
</sect2>

<sect2 id="locking-rows">
<title>Row-level Locks</title>

<para>
In addition to table-level locks, there are row-level locks, which
are listed below with the contexts in which they are used
automatically by <productname>PostgreSQL</productname>. See
<xref linkend="row-lock-compatibility"/> for a complete table of
row-level lock conflicts. Note that a transaction can hold
conflicting locks on the same row, even in different subtransactions;
but other than that, two transactions can never hold conflicting locks
on the same row. Row-level locks do not affect data querying; they
block only <emphasis>writers and lockers</emphasis> to the same row.
</para>

<variablelist>
<title>Row-level Lock Modes</title>
<varlistentry>
<term>
<literal>FOR UPDATE</literal>
</term>
<listitem>
<para>
<literal>FOR UPDATE</literal> causes the rows retrieved by the
<command>SELECT</command> statement to be locked as though for
update. This prevents them from being locked, modified or deleted by
other transactions until the current transaction ends. That is,
other transactions that attempt <command>UPDATE</command>,
<command>DELETE</command>,
<command>SELECT FOR UPDATE</command>,
<command>SELECT FOR NO KEY UPDATE</command>,
<command>SELECT FOR SHARE</command> or
<command>SELECT FOR KEY SHARE</command>
of these rows will be blocked until the current transaction ends;
conversely, <command>SELECT FOR UPDATE</command> will wait for a
concurrent transaction that has run any of those commands on the
same row,
and will then lock and return the updated row (or no row, if the
row was deleted). Within a <literal>REPEATABLE READ</literal> or
<literal>SERIALIZABLE</literal> transaction,
however, an error will be thrown if a row to be locked has changed
since the transaction started. For further discussion see
<xref linkend="applevel-consistency"/>.
</para>

<para>
The <literal>FOR UPDATE</literal> lock mode
is also acquired by any <command>DELETE</command> on a row, and also by an
<command>UPDATE</command> that modifies the values of certain columns. Currently,
the set of columns considered for the <command>UPDATE</command> case are those that
have a unique index on them that can be used in a foreign key (so partial
indexes and expressional indexes are not considered), but this might change
in the future.
</para>
</listitem>
</varlistentry>

<varlistentry>
<term>
<literal>FOR NO KEY UPDATE</literal>
</term>
<listitem>
<para>
Behaves similarly to <literal>FOR UPDATE</literal>, except that the lock
acquired is weaker: this lock will not block
<literal>SELECT FOR KEY SHARE</literal> commands that attempt to acquire
a lock on the same rows. This lock mode is also acquired by any
<command>UPDATE</command> that does not acquire a <literal>FOR UPDATE</literal> lock.
</para>
</listitem>
</varlistentry>

<varlistentry>
<term>
<literal>FOR SHARE</literal>
</term>
<listitem>
<para>
Behaves similarly to <literal>FOR NO KEY UPDATE</literal>, except that it
acquires a shared lock rather than exclusive lock on each retrieved
row. A shared lock blocks other transactions from performing
<command>UPDATE</command>, <command>DELETE</command>,
<command>SELECT FOR UPDATE</command> or
<command>SELECT FOR NO KEY UPDATE</command> on these rows, but it does not
prevent them from performing <command>SELECT FOR SHARE</command> or
<command>SELECT FOR KEY SHARE</command>.
</para>
</listitem>
</varlistentry>

<varlistentry>
<term>
<literal>FOR KEY SHARE</literal>
</term>
<listitem>
<para>
Behaves similarly to <literal>FOR SHARE</literal>, except that the
lock is weaker: <literal>SELECT FOR UPDATE</literal> is blocked, but not
<literal>SELECT FOR NO KEY UPDATE</literal>. A key-shared lock blocks
other transactions from performing <command>DELETE</command> or
any <command>UPDATE</command> that changes the key values, but not
other <command>UPDATE</command>, and neither does it prevent
<command>SELECT FOR NO KEY UPDATE</command>, <command>SELECT FOR SHARE</command>,
or <command>SELECT FOR KEY SHARE</command>.
</para>
</listitem>
</varlistentry>
</variablelist>
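
<para>
These row-level lock modes are requested by attaching a locking clause
to a <command>SELECT</command>. A brief sketch, using a hypothetical
<literal>accounts</literal> table:
<screen>
BEGIN;
-- exclusive row lock: blocks concurrent UPDATE/DELETE and all
-- SELECT FOR ... locking attempts on this row
SELECT * FROM accounts WHERE acctnum = 11111 FOR UPDATE;
-- shared row lock: concurrent FOR SHARE and FOR KEY SHARE requests
-- on this row are still allowed
SELECT * FROM accounts WHERE acctnum = 22222 FOR SHARE;
COMMIT;
</screen>
</para>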

<para>
<productname>PostgreSQL</productname> doesn't remember any
information about modified rows in memory, so there is no limit on
the number of rows locked at one time. However, locking a row
might cause a disk write, e.g., <command>SELECT FOR
UPDATE</command> modifies selected rows to mark them locked, and so
will result in disk writes.
</para>

<table tocentry="1" id="row-lock-compatibility">
<title>Conflicting Row-level Locks</title>
<tgroup cols="5">
<colspec colnum="2" colname="lockst"/>
<colspec colnum="5" colname="lockend"/>
<spanspec namest="lockst" nameend="lockend" spanname="lockreq"/>
<thead>
<row>
<entry morerows="1">Requested Lock Mode</entry>
<entry spanname="lockreq">Current Lock Mode</entry>
</row>
<row>
<entry>FOR KEY SHARE</entry>
<entry>FOR SHARE</entry>
<entry>FOR NO KEY UPDATE</entry>
<entry>FOR UPDATE</entry>
</row>
</thead>
<tbody>
<row>
<entry>FOR KEY SHARE</entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center">X</entry>
</row>
<row>
<entry>FOR SHARE</entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
</row>
<row>
<entry>FOR NO KEY UPDATE</entry>
<entry align="center"></entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
</row>
<row>
<entry>FOR UPDATE</entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
<entry align="center">X</entry>
</row>
</tbody>
</tgroup>
</table>
</sect2>

<sect2 id="locking-pages">
<title>Page-level Locks</title>

<para>
In addition to table and row locks, page-level share/exclusive locks are
used to control read/write access to table pages in the shared buffer
pool. These locks are released immediately after a row is fetched or
updated. Application developers normally need not be concerned with
page-level locks, but they are mentioned here for completeness.
</para>
</sect2>

<sect2 id="locking-deadlocks">
<title>Deadlocks</title>

<indexterm zone="locking-deadlocks">
<primary>deadlock</primary>
</indexterm>

<para>
The use of explicit locking can increase the likelihood of
<firstterm>deadlocks</firstterm>, wherein two (or more) transactions each
hold locks that the other wants. For example, if transaction 1
acquires an exclusive lock on table A and then tries to acquire
an exclusive lock on table B, while transaction 2 has already
exclusive-locked table B and now wants an exclusive lock on table
A, then neither one can proceed.
<productname>PostgreSQL</productname> automatically detects
deadlock situations and resolves them by aborting one of the
transactions involved, allowing the other(s) to complete.
(Exactly which transaction will be aborted is difficult to
predict and should not be relied upon.)
</para>

<para>
Note that deadlocks can also occur as the result of row-level
locks (and thus, they can occur even if explicit locking is not
used). Consider the case in which two concurrent
transactions modify a table. The first transaction executes:

<screen>
UPDATE accounts SET balance = balance + 100.00 WHERE acctnum = 11111;
</screen>

This acquires a row-level lock on the row with the specified
account number. Then, the second transaction executes:

<screen>
UPDATE accounts SET balance = balance + 100.00 WHERE acctnum = 22222;
UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 11111;
</screen>

The first <command>UPDATE</command> statement successfully
acquires a row-level lock on the specified row, so it succeeds in
updating that row. However, the second <command>UPDATE</command>
statement finds that the row it is attempting to update has
already been locked, so it waits for the transaction that
acquired the lock to complete. Transaction two is now waiting on
transaction one to complete before it continues execution. Now,
transaction one executes:

<screen>
UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 22222;
</screen>

Transaction one attempts to acquire a row-level lock on the
specified row, but it cannot: transaction two already holds such
a lock. So it waits for transaction two to complete. Thus,
transaction one is blocked on transaction two, and transaction
two is blocked on transaction one: a deadlock
condition. <productname>PostgreSQL</productname> will detect this
situation and abort one of the transactions.
</para>

<para>
The best defense against deadlocks is generally to avoid them by
being certain that all applications using a database acquire
locks on multiple objects in a consistent order. In the example
above, if both transactions
had updated the rows in the same order, no deadlock would have
occurred. One should also ensure that the first lock acquired on
an object in a transaction is the most restrictive mode that will be
needed for that object. If it is not feasible to verify this in
advance, then deadlocks can be handled on-the-fly by retrying
transactions that abort due to deadlocks.
</para>
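
<para>
In the example above, the deadlock could have been avoided by having
both transactions touch the rows in a consistent order, e.g., by
ascending account number:
<screen>
-- both transactions lock acctnum 11111 before 22222, so the second
-- simply queues behind the first instead of deadlocking
UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 11111;
UPDATE accounts SET balance = balance + 100.00 WHERE acctnum = 22222;
</screen>
</para>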

<para>
So long as no deadlock situation is detected, a transaction seeking
either a table-level or row-level lock will wait indefinitely for
conflicting locks to be released. This means it is a bad idea for
applications to hold transactions open for long periods of time
(e.g., while waiting for user input).
</para>
</sect2>

<sect2 id="advisory-locks">
<title>Advisory Locks</title>

<indexterm zone="advisory-locks">
<primary>advisory lock</primary>
</indexterm>

<indexterm zone="advisory-locks">
<primary>lock</primary>
<secondary>advisory</secondary>
</indexterm>

<para>
<productname>PostgreSQL</productname> provides a means for
creating locks that have application-defined meanings. These are
called <firstterm>advisory locks</firstterm>, because the system does not
enforce their use — it is up to the application to use them
correctly. Advisory locks can be useful for locking strategies
that are an awkward fit for the MVCC model.
For example, a common use of advisory locks is to emulate pessimistic
locking strategies typical of so-called <quote>flat file</quote> data
management systems.
While a flag stored in a table could be used for the same purpose,
advisory locks are faster, avoid table bloat, and are automatically
cleaned up by the server at the end of the session.
</para>
<para>
There are two ways to acquire an advisory lock in
<productname>PostgreSQL</productname>: at session level or at
transaction level.
Once acquired at session level, an advisory lock is held until
explicitly released or the session ends. Unlike standard lock requests,
session-level advisory lock requests do not honor transaction semantics:
a lock acquired during a transaction that is later rolled back will still
be held following the rollback, and likewise an unlock is effective even
if the calling transaction fails later. A lock can be acquired multiple
times by its owning process; for each completed lock request there must
be a corresponding unlock request before the lock is actually released.
Transaction-level lock requests, on the other hand, behave more like
regular lock requests: they are automatically released at the end of the
transaction, and there is no explicit unlock operation. This behavior
is often more convenient than the session-level behavior for short-term
usage of an advisory lock.
Session-level and transaction-level lock requests for the same advisory
lock identifier will block each other in the expected way.
If a session already holds a given advisory lock, additional requests by
it will always succeed, even if other sessions are awaiting the lock; this
statement is true regardless of whether the existing lock hold and new
request are at session level or transaction level.
</para>
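<para>
As a brief illustration of the two behaviors (the key value
<literal>10</literal> here is arbitrary):
<screen>
BEGIN;
SELECT pg_advisory_lock(10);       -- session-level acquisition
ROLLBACK;
-- the lock is still held despite the rollback
SELECT pg_advisory_unlock(10);     -- must be released explicitly

BEGIN;
SELECT pg_advisory_xact_lock(10);  -- transaction-level acquisition
COMMIT;
-- the lock was released automatically at commit
</screen>
</para>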
<para>
Like all locks in
<productname>PostgreSQL</productname>, a complete list of advisory locks
currently held by any session can be found in the <link
linkend="view-pg-locks"><structname>pg_locks</structname></link> system
view.
</para>
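<para>
For example, the advisory locks currently held anywhere in the cluster
can be inspected with a query such as:
<screen>
SELECT locktype, classid, objid, pid, granted
  FROM pg_locks
 WHERE locktype = 'advisory';
</screen>
For advisory locks, the key value supplied by the application is reported
in the <structfield>classid</structfield> and <structfield>objid</structfield>
columns.
</para>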
<para>
Both advisory locks and regular locks are stored in a shared memory
pool whose size is defined by the configuration variables
<xref linkend="guc-max-locks-per-transaction"/> and
<xref linkend="guc-max-connections"/>.
Care must be taken not to exhaust this
memory or the server will be unable to grant any locks at all.
This imposes an upper limit on the number of advisory locks
grantable by the server, typically in the tens to hundreds of thousands
depending on how the server is configured.
</para>
<para>
In certain cases using advisory locking methods, especially in queries
involving explicit ordering and <literal>LIMIT</literal> clauses, care must be
taken to control the locks acquired because of the order in which SQL
expressions are evaluated. For example:
<screen>
SELECT pg_advisory_lock(id) FROM foo WHERE id = 12345; -- ok
SELECT pg_advisory_lock(id) FROM foo WHERE id > 12345 LIMIT 100; -- danger!
SELECT pg_advisory_lock(q.id) FROM
(
  SELECT id FROM foo WHERE id > 12345 LIMIT 100
) q; -- ok
</screen>
In the above queries, the second form is dangerous because the
<literal>LIMIT</literal> is not guaranteed to be applied before the locking
function is executed. This might cause some locks to be acquired
that the application was not expecting, and hence would fail to release
(until it ends the session).
From the point of view of the application, such locks
would be dangling, although still viewable in
<structname>pg_locks</structname>.
</para>
<para>
The functions provided to manipulate advisory locks are described in
<xref linkend="functions-advisory-locks"/>.
</para>
</sect2>
</sect1>
<sect1 id="applevel-consistency">
<title>Data Consistency Checks at the Application Level</title>
<para>
It is very difficult to enforce business rules regarding data integrity
using Read Committed transactions because the view of the data is
shifting with each statement, and even a single statement may not
restrict itself to the statement's snapshot if a write conflict occurs.
</para>
<para>
While a Repeatable Read transaction has a stable view of the data
throughout its execution, there is a subtle issue with using
<acronym>MVCC</acronym> snapshots for data consistency checks, involving
something known as <firstterm>read/write conflicts</firstterm>.
If one transaction writes data and a concurrent transaction attempts
to read the same data (whether before or after the write), it cannot
see the work of the other transaction. The reader then appears to have
executed first regardless of which started first or which committed
first. If that is as far as it goes, there is no problem, but
if the reader also writes data which is read by a concurrent transaction
there is now a transaction which appears to have run before either of
the previously mentioned transactions. If the transaction which appears
to have executed last actually commits first, it is very easy for a
cycle to appear in a graph of the order of execution of the transactions.
When such a cycle appears, integrity checks will not work correctly
without some help.
</para>
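<para>
As a concrete sketch of such a conflict (assuming a hypothetical
<structname>accounts</structname> table with an application rule that the
combined balance must stay nonnegative), two Repeatable Read transactions
can each validate the rule against a snapshot the other is invalidating:
<screen>
-- Session A                          -- Session B
BEGIN ISOLATION LEVEL REPEATABLE READ;
                                      BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT sum(balance) FROM accounts;
                                      SELECT sum(balance) FROM accounts;
-- both sessions see the same total and decide a withdrawal is safe
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
                                      UPDATE accounts SET balance = balance - 100 WHERE id = 2;
COMMIT;
                                      COMMIT;
</screen>
Both commits succeed at Repeatable Read even if the combined result
violates the rule each session checked; at the Serializable level one of
the two transactions would instead be rolled back with a serialization
failure.
</para>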
<para>
As mentioned in <xref linkend="xact-serializable"/>, Serializable
transactions are just Repeatable Read transactions which add
nonblocking monitoring for dangerous patterns of read/write conflicts.
When a pattern is detected which could cause a cycle in the apparent
order of execution, one of the transactions involved is rolled back to
break the cycle.
</para>
<sect2 id="serializable-consistency">
<title>Enforcing Consistency With Serializable Transactions</title>
<para>
If the Serializable transaction isolation level is used for all writes
and for all reads which need a consistent view of the data, no other
effort is required to ensure consistency. Software from other
environments which is written to use serializable transactions to
ensure consistency should <quote>just work</quote> in this regard in
<productname>PostgreSQL</productname>.
</para>
<para>
When using this technique, an unnecessary burden on application
programmers can be avoided if the application software goes through a
framework which automatically retries transactions that are rolled
back with a serialization failure. It may be a good idea to set
<literal>default_transaction_isolation</literal> to <literal>serializable</literal>.
It would also be wise to take some action to ensure that no other
transaction isolation level is used, either inadvertently or to
subvert integrity checks, for example by checking the transaction
isolation level in triggers.
</para>
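<para>
A minimal sketch of such a trigger-based check, using a hypothetical
function intended to be attached as a <literal>BEFORE INSERT OR
UPDATE</literal> trigger on the tables being protected:
<screen>
CREATE FUNCTION require_serializable() RETURNS trigger AS $$
BEGIN
  -- reject writes made at any weaker isolation level
  IF current_setting('transaction_isolation') &lt;&gt; 'serializable' THEN
    RAISE EXCEPTION 'this table may only be modified under Serializable isolation';
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
</screen>
</para>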
<para>
See <xref linkend="xact-serializable"/> for performance suggestions.
</para>

   <warning>
    <para>
     This level of integrity protection using Serializable transactions
     does not yet extend to hot standby mode (<xref linkend="hot-standby"/>).
     Because of that, those using hot standby may want to use Repeatable
     Read and explicit locking on the master.
    </para>
   </warning>
  </sect2>

  <sect2 id="non-serializable-consistency">
   <title>Enforcing Consistency With Explicit Blocking Locks</title>

   <para>
    When non-serializable writes are possible,
    to ensure the current validity of a row and protect it against
    concurrent updates one must use <command>SELECT FOR UPDATE</command>,
    <command>SELECT FOR SHARE</command>, or an appropriate <command>LOCK
    TABLE</command> statement.  (<command>SELECT FOR UPDATE</command>
    and <command>SELECT FOR SHARE</command> lock just the
    returned rows against concurrent updates, while <command>LOCK
    TABLE</command> locks the whole table.)  This should be taken into
    account when porting applications to
    <productname>PostgreSQL</productname> from other environments.
   </para>
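
   <para>
    For example, to read a row and make sure it cannot change before an
    <command>UPDATE</command> later in the same transaction (the
    <structname>accounts</structname> table is hypothetical):
<programlisting>
BEGIN;
SELECT balance FROM accounts WHERE acctnum = 12345 FOR UPDATE;
-- the row is now locked; concurrent updates of it will block
UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 12345;
COMMIT;
</programlisting>
   </para>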

   <para>
    Also of note to those converting from other environments is the fact
    that <command>SELECT FOR UPDATE</command> does not ensure that a
    concurrent transaction will not update or delete a selected row.
    To do that in <productname>PostgreSQL</productname> you must actually
    update the row, even if no values need to be changed.
    <command>SELECT FOR UPDATE</command> <emphasis>temporarily blocks</emphasis>
    other transactions from acquiring the same lock or executing an
    <command>UPDATE</command> or <command>DELETE</command> which would
    affect the locked row, but once the transaction holding this lock
    commits or rolls back, a blocked transaction will proceed with the
    conflicting operation unless an actual <command>UPDATE</command> of
    the row was performed while the lock was held.
   </para>
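
   <para>
    For instance, a dummy update that leaves the values unchanged is
    enough to count as an actual <command>UPDATE</command> (again using a
    hypothetical <structname>accounts</structname> table):
<programlisting>
BEGIN;
SELECT * FROM accounts WHERE acctnum = 12345 FOR UPDATE;
-- an UPDATE, even one changing nothing, creates a new row version;
-- a Repeatable Read transaction blocked on this row will then fail
-- with a serialization error rather than proceed after our commit
UPDATE accounts SET balance = balance WHERE acctnum = 12345;
COMMIT;
</programlisting>
   </para>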

   <para>
    Global validity checks require extra thought under
    non-serializable <acronym>MVCC</acronym>.
    For example, a banking application might wish to check that the sum of
    all credits in one table equals the sum of debits in another table,
    when both tables are being actively updated.  Comparing the results of two
    successive <literal>SELECT sum(...)</literal> commands will not work reliably in
    Read Committed mode, since the second query will likely include the results
    of transactions not counted by the first.  Doing the two sums in a
    single repeatable read transaction will give an accurate picture of only the
    effects of transactions that committed before the repeatable read transaction
    started — but one might legitimately wonder whether the answer is still
    relevant by the time it is delivered.  If the repeatable read transaction
    itself applied some changes before trying to make the consistency check,
    the usefulness of the check becomes even more debatable, since now it
    includes some but not all post-transaction-start changes.  In such cases
    a careful person might wish to lock all tables needed for the check,
    in order to get an indisputable picture of current reality.  A
    <literal>SHARE</literal> mode (or higher) lock guarantees that there are no
    uncommitted changes in the locked table, other than those of the current
    transaction.
   </para>
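
   <para>
    For the banking example above, the check might be run as follows
    (table and column names are hypothetical):
<programlisting>
BEGIN;
LOCK TABLE credits, debits IN SHARE MODE;
SELECT sum(amount) FROM credits;
SELECT sum(amount) FROM debits;
COMMIT;
</programlisting>
    The <literal>SHARE</literal> locks block concurrent writers to both
    tables until the check completes, so the two sums are taken against a
    single stable state.
   </para>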

   <para>
    Note also that if one is relying on explicit locking to prevent concurrent
    changes, one should either use Read Committed mode, or in Repeatable Read
    mode be careful to obtain
    locks before performing queries.  A lock obtained by a
    repeatable read transaction guarantees that no other transactions modifying
    the table are still running, but if the snapshot seen by the
    transaction predates obtaining the lock, it might predate some now-committed
    changes in the table.  A repeatable read transaction's snapshot is actually
    frozen at the start of its first query or data-modification command
    (<literal>SELECT</literal>, <literal>INSERT</literal>,
    <literal>UPDATE</literal>, or <literal>DELETE</literal>), so
    it is possible to obtain locks explicitly before the snapshot is
    frozen.
   </para>
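
   <para>
    A sketch of the safe ordering in Repeatable Read mode (the table name
    is hypothetical):
<programlisting>
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
-- no snapshot has been taken yet, so acquire the lock first
LOCK TABLE mytab IN SHARE MODE;
-- this first query freezes the snapshot, after the lock is held,
-- so it sees all changes committed before the lock was granted
SELECT sum(amount) FROM mytab;
COMMIT;
</programlisting>
   </para>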
  </sect2>
 </sect1>

 <sect1 id="mvcc-caveats">
  <title>Caveats</title>

  <para>
   Some DDL commands, currently only <xref linkend="sql-truncate"/> and the
   table-rewriting forms of <xref linkend="sql-altertable"/>, are not
   MVCC-safe.  This means that after the truncation or rewrite commits, the
   table will appear empty to concurrent transactions, if they are using a
   snapshot taken before the DDL command committed.  This will only be an
   issue for a transaction that did not access the table in question
   before the DDL command started — any transaction that has done so
   would hold at least an <literal>ACCESS SHARE</literal> table lock,
   which would block the DDL command until that transaction completes.
   So these commands will not cause any apparent inconsistency in the
   table contents for successive queries on the target table, but they
   could cause visible inconsistency between the contents of the target
   table and other tables in the database.
  </para>
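
  <para>
   As an illustration, consider two concurrent sessions (the table names
   are hypothetical):
<programlisting>
-- session 1
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM other_tab;      -- takes the snapshot

                                     -- session 2
                                     TRUNCATE mytab;   -- commits at once

-- session 1 again
SELECT count(*) FROM mytab;          -- returns 0, even though the
                                     -- snapshot predates the TRUNCATE
COMMIT;
</programlisting>
  </para>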

  <para>
   Support for the Serializable transaction isolation level has not yet
   been added to Hot Standby replication targets (described in
   <xref linkend="hot-standby"/>).  The strictest isolation level currently
   supported in hot standby mode is Repeatable Read.  While performing all
   permanent database writes within Serializable transactions on the
   master will ensure that all standbys will eventually reach a consistent
   state, a Repeatable Read transaction run on the standby can sometimes
   see a transient state that is inconsistent with any serial execution
   of the transactions on the master.
  </para>
 </sect1>

 <sect1 id="locking-indexes">
  <title>Locking and Indexes</title>

  <indexterm zone="locking-indexes">
   <primary>index</primary>
   <secondary>locks</secondary>
  </indexterm>

  <para>
   Though <productname>PostgreSQL</productname> provides nonblocking
   read/write access to table data, such access is not currently offered
   for every index access method implemented
   in <productname>PostgreSQL</productname>.
   The various index types are handled as follows:

   <variablelist>
    <varlistentry>
     <term>
      B-tree, <acronym>GiST</acronym> and <acronym>SP-GiST</acronym> indexes
     </term>
     <listitem>
      <para>
       Short-term share/exclusive page-level locks are used for
       read/write access.  Locks are released immediately after each
       index row is fetched or inserted.  These index types provide
       the highest concurrency without deadlock conditions.
      </para>
     </listitem>
    </varlistentry>

<varlistentry>
|
|
|
|
<term>
|
2005-10-21 03:41:28 +02:00
|
|
|
Hash indexes
|
1999-05-26 19:27:39 +02:00
|
|
|
</term>
|
|
|
|
<listitem>
|
|
|
|
<para>
|
2005-10-21 03:41:28 +02:00
|
|
|
Share/exclusive hash-bucket-level locks are used for read/write
|
|
|
|
access. Locks are released after the whole bucket is processed.
|
|
|
|
Bucket-level locks provide better concurrency than index-level
|
|
|
|
ones, but deadlock is possible since the locks are held longer
|
|
|
|
than one index operation.
|
1999-05-26 19:27:39 +02:00
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
</varlistentry>
|
2006-09-14 13:16:27 +02:00
|
|
|
|
|
|
|
    <varlistentry>
     <term>
      <acronym>GIN</acronym> indexes
     </term>
     <listitem>
      <para>
       Short-term share/exclusive page-level locks are used for
       read/write access.  Locks are released immediately after each
       index row is fetched or inserted.  But note that insertion of a
       GIN-indexed value usually produces several index key insertions
       per row, so GIN might do substantial work for a single value's
       insertion.
      </para>
     </listitem>
    </varlistentry>
   </variablelist>
  </para>

  <para>
   Currently, B-tree indexes offer the best performance for concurrent
   applications; since they also have more features than hash
   indexes, they are the recommended index type for concurrent
   applications that need to index scalar data.  When dealing with
   non-scalar data, B-trees are not useful, and GiST, SP-GiST or GIN
   indexes should be used instead.
  </para>
 </sect1>
</chapter>