<!-- $PostgreSQL: pgsql/doc/src/sgml/high-availability.sgml,v 1.11 2006/11/22 04:01:40 momjian Exp $ -->

<chapter id="high-availability">

<title>High Availability and Load Balancing</title>

<indexterm><primary>high availability</></>
<indexterm><primary>failover</></>
<indexterm><primary>replication</></>
<indexterm><primary>load balancing</></>
<indexterm><primary>clustering</></>
<indexterm><primary>data partitioning</></>

<para>
Database servers can work together to allow a second server to
take over quickly if the primary server fails (high
availability), or to allow several computers to serve the same
data (load balancing). Ideally, database servers could work
together seamlessly. Web servers serving static web pages can
be combined quite easily by merely load-balancing web requests
to multiple machines. In fact, read-only database servers can
be combined relatively easily too. Unfortunately, most database
servers have a read/write mix of requests, and read/write servers
are much harder to combine. This is because, although read-only
data needs to be placed on each server only once, a write to any
server has to be propagated to all servers so that future read
requests to those servers return consistent results.
</para>

<para>
This synchronization problem is the fundamental difficulty for
servers working together. Because no single solution eliminates
the impact of the synchronization problem for all use cases,
there are multiple solutions. Each solution addresses the
problem in a different way, and minimizes its impact for a
specific workload.
</para>

<para>
Some solutions deal with synchronization by allowing only one
server to modify the data. Servers that can modify data are
called read/write or "master" servers. Servers that can reply
to read-only queries are called "slave" servers. Servers that
cannot be accessed until they are changed to master servers are
called "standby" servers.
</para>

<para>
Some failover and load balancing solutions are synchronous,
meaning that a data-modifying transaction is not considered
committed until all servers have committed the transaction. This
guarantees that a failover will not lose any data and that all
load-balanced servers will return consistent results with little
propagation delay. Asynchronous updating has a delay between the
time of commit and its propagation to the other servers, opening
the possibility that some transactions might be lost in the switch
to a backup server, and that load-balanced servers might return
slightly stale results. Asynchronous communication is used when
synchronous would be too slow.
</para>

<para>
Solutions can also be categorized by their granularity. Some solutions
can deal only with an entire database server, while others allow control
at the per-table or per-database level.
</para>

<para>
Performance must be considered in any failover or load balancing
choice. There is usually a tradeoff between functionality and
performance. For example, a fully synchronous solution over a slow
network might cut performance by more than half, while an asynchronous
one might have a minimal performance impact.
</para>

<para>
The remainder of this section outlines various failover, replication,
and load balancing solutions.
</para>

<variablelist>

<varlistentry>
<term>Shared Disk Failover</term>
<listitem>

<para>
Shared disk failover avoids synchronization overhead by having only one
copy of the database. It uses a single disk array that is shared by
multiple servers. If the main database server fails, the standby server
is able to mount and start the database as though it were recovering from
a database crash. This allows rapid failover with no data loss.
</para>

<para>
Shared hardware functionality is common in network storage
devices. Using a network file system is also possible, though
care must be taken that the file system has full POSIX behavior.
One significant limitation of this method is that if the shared
disk array fails or becomes corrupt, the primary and standby
servers are both nonfunctional. Another issue is that the
standby server should never access the shared storage while
the primary server is running. It is also possible to use
some type of file system mirroring to keep the standby server
current, but the mirroring must be done in a way that ensures the
standby server has a consistent copy of the file system.
</para>

<!--
https://forge.continuent.org/pipermail/sequoia/2006-November/004070.html

Oracle RAC is a shared disk approach and just sends cache invalidations
to other nodes but not actual data. As the disk is shared, data is
only committed once to disk and there is a distributed locking
protocol to make nodes agree on a serializable transactional order.
-->

</listitem>
</varlistentry>

<varlistentry>
<term>Warm Standby Using Point-In-Time Recovery</term>
<listitem>

<para>
A warm standby server (see <xref linkend="warm-standby">) can
be kept current by reading a stream of write-ahead log (WAL)
records. If the main server fails, the warm standby contains
almost all of the data of the main server, and can be quickly
made the new master database server. This is asynchronous and
can only be done for the entire database server.
</para>
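
<para>
As a rough illustration only (the paths are hypothetical, and the
script that waits for each WAL file to arrive on the standby is not
shown), WAL shipping for a warm standby might be configured like this:
</para>
<programlisting>
# postgresql.conf on the primary: copy each completed WAL segment
# to storage that the standby can read (example path only)
archive_command = 'cp %p /mnt/standby_archive/%f'

# recovery.conf on the standby: restore each segment as it arrives;
# a real setup would use a command that waits for the next file
restore_command = 'cp /mnt/standby_archive/%f %p'
</programlisting>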
</listitem>
</varlistentry>

<varlistentry>
<term>Master-Slave Replication</term>
<listitem>

<para>
A master-slave replication setup sends all data modification
queries to the master server. The master server asynchronously
sends data changes to the slave server. The slave can answer
read-only queries while the master server is running. The
slave server is ideal for data warehouse queries.
</para>

<para>
Slony-I is an example of this type of replication, with per-table
granularity, and support for multiple slaves. Because it
updates the slave server asynchronously (in batches), data
might be lost during failover.
</para>
</listitem>
</varlistentry>

<varlistentry>
<term>Statement-Based Replication Middleware</term>
<listitem>

<para>
With statement-based replication middleware, a program intercepts
every SQL query and sends it to all servers. Each server
operates independently. Read-only queries can be sent to a
single server because there is no need for all servers to
process them.
</para>

<para>
If queries are simply broadcast unmodified, functions like
<function>random()</>, <function>CURRENT_TIMESTAMP</>, and
sequences would have different values on different servers.
This is because each server operates independently, and because
SQL queries are broadcast (and not actual modified rows). If
this is unacceptable, either the middleware or the application
must query such values from a single server and then use those
values in write queries. Also, care must be taken that all
transactions either commit or abort on all servers, perhaps
using two-phase commit (<xref linkend="sql-prepare-transaction"
endterm="sql-prepare-transaction-title"> and <xref
linkend="sql-commit-prepared" endterm="sql-commit-prepared-title">).
Pgpool is an example of this type of replication.
</para>
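
<para>
For example, a middleware layer might pin a non-deterministic value by
reading it from one server and then broadcasting the write with the
literal result (the <literal>orders</> table here is purely hypothetical):
</para>
<programlisting>
-- Step 1: run only on one server to obtain the value
SELECT CURRENT_TIMESTAMP;
--  e.g. returns  2006-11-22 05:01:40+01

-- Step 2: broadcast to every server with the value inlined,
-- so all servers store exactly the same timestamp
INSERT INTO orders (order_id, created_at)
    VALUES (1001, '2006-11-22 05:01:40+01');
</programlisting>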
</listitem>
</varlistentry>

<varlistentry>
<term>Synchronous Multi-Master Replication</term>
<listitem>

<para>
In synchronous multi-master replication, each server can accept
write requests, and modified data is transmitted from the
original server to every other server before each transaction
commits. Heavy write activity can cause excessive locking,
leading to poor performance. In fact, write performance is
often worse than that of a single server. Read requests can
be sent to any server. Some implementations use cluster-wide
shared memory or shared disk to reduce the communication
overhead. Clustering is best for mostly read workloads, though
its big advantage is that any server can accept write requests
— there is no need to partition workloads between master
and slave servers, and because the data changes are sent from
one server to another, there is no problem with non-deterministic
functions like <function>random()</>.
</para>

<para>
<productname>PostgreSQL</> does not offer this type of load
balancing, though <productname>PostgreSQL</> two-phase commit
(<xref linkend="sql-prepare-transaction"
endterm="sql-prepare-transaction-title"> and <xref
linkend="sql-commit-prepared" endterm="sql-commit-prepared-title">)
can be used to implement this in application code or middleware.
</para>
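
<para>
A minimal sketch of that idea, using a hypothetical
<literal>accounts</> table and transaction name: the application runs
the same transaction on every server, prepares it everywhere, and only
then commits everywhere (or rolls back everywhere if any prepare fails):
</para>
<programlisting>
-- Run on each participating server
BEGIN;
UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 12345;
PREPARE TRANSACTION 'tx_12345';

-- After every server has prepared successfully, run on each server:
COMMIT PREPARED 'tx_12345';

-- If any server failed to prepare, instead run on the others:
-- ROLLBACK PREPARED 'tx_12345';
</programlisting>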
</listitem>
</varlistentry>

<varlistentry>
<term>Asynchronous Multi-Master Replication</term>
<listitem>

<para>
For servers that are not regularly connected, like laptops or
remote servers, keeping data consistent among servers is a
challenge. One simple solution is to allow each server to
modify the data, and have periodic communication compare
databases and ask users to resolve any conflicts.
</para>
</listitem>
</varlistentry>

<varlistentry>
<term>Data Partitioning</term>
<listitem>

<para>
Data partitioning splits tables into data sets. Each set can
be modified by only one server. For example, data can be
partitioned by offices, e.g. London and Paris, with a server
in each office. If queries combining London and Paris data
are necessary, an application can query both servers, or
master/slave replication can be used to keep a read-only copy
of the other office's data on each server.
</para>
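
<para>
A simple sketch of such a partitioning, using a hypothetical
<literal>employees</> table constrained to one office per server:
</para>
<programlisting>
-- On the London server: only London rows are allowed
CREATE TABLE employees (
    emp_id   integer PRIMARY KEY,
    office   text NOT NULL CHECK (office = 'London'),
    name     text
);

-- The Paris server has the same table with CHECK (office = 'Paris').
-- Queries spanning both offices go to both servers, or read a
-- replicated copy of the other office's table.
</programlisting>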
</listitem>
</varlistentry>

<varlistentry>
<term>Multi-Server Parallel Query Execution</term>
<listitem>

<para>
This allows multiple servers to work concurrently on a single
query. One possible way this could work is for the data to be
split among servers and for each server to execute its part of
the query and send its results to a central server, where they
are combined and returned to the user. Pgpool-II has this
capability.
</para>
</listitem>
</varlistentry>

<varlistentry>
<term>Commercial Solutions</term>
<listitem>

<para>
Because <productname>PostgreSQL</> is open source and easily
extended, a number of companies have taken <productname>PostgreSQL</>
and created commercial closed-source solutions with unique
failover, replication, and load balancing capabilities.
</para>
</listitem>
</varlistentry>

</variablelist>

</chapter>