Add functions pg_start_backup, pg_stop_backup to create backup label

and history files as per recent discussion.  While at it, remove
pg_terminate_backend, since we have decided we do not have time during
this release cycle to address the reliability concerns it creates.
Split the 'Miscellaneous Functions' documentation section into
'System Information Functions' and 'System Administration Functions',
which hopefully will draw the eyes of those looking for such things.
Committed by Tom Lane on 2004-08-03 20:32:36 +00:00 (commit 58c41712d5, parent a83c45c4c6)
12 changed files with 1352 additions and 949 deletions


@@ -1,5 +1,5 @@
<!--
$PostgreSQL: pgsql/doc/src/sgml/backup.sgml,v 2.39 2004/04/22 07:02:35 neilc Exp $
$PostgreSQL: pgsql/doc/src/sgml/backup.sgml,v 2.40 2004/08/03 20:32:30 tgl Exp $
-->
<chapter id="backup">
<title>Backup and Restore</title>
@@ -14,12 +14,14 @@ $PostgreSQL: pgsql/doc/src/sgml/backup.sgml,v 2.39 2004/04/22 07:02:35 neilc Exp
</para>
<para>
There are two fundamentally different approaches to backing up
There are three fundamentally different approaches to backing up
<productname>PostgreSQL</> data:
<itemizedlist>
<listitem><para><acronym>SQL</> dump</para></listitem>
<listitem><para>File system level backup</para></listitem>
<listitem><para>On-line backup</para></listitem>
</itemizedlist>
Each has its own strengths and weaknesses.
</para>
<sect1 id="backup-dump">
@@ -314,8 +316,8 @@ tar -cf backup.tar /usr/local/pgsql/data
The database server <emphasis>must</> be shut down in order to
get a usable backup. Half-way measures such as disallowing all
connections will <emphasis>not</emphasis> work
(<command>tar</command> and similar tools do not take an atomic
snapshot of the state of the filesystem at a point in
(mainly because <command>tar</command> and similar tools do not take an
atomic snapshot of the state of the filesystem at a point in
time). Information about stopping the server can be found in
<xref linkend="postmaster-shutdown">. Needless to say, you
also need to shut down the server before restoring the data.
@@ -335,7 +337,8 @@ tar -cf backup.tar /usr/local/pgsql/data
information. Of course it is also impossible to restore only a
table and the associated <filename>pg_clog</filename> data
because that would render all other tables in the database
cluster useless.
cluster useless. So file system backups only work for complete
restoration of an entire database cluster.
</para>
</listitem>
</orderedlist>
@@ -355,7 +358,7 @@ tar -cf backup.tar /usr/local/pgsql/data
properly shut down; therefore, when you start the database server
on the backed-up data, it will think the server had crashed
and replay the WAL log. This is not a problem, just be aware of
it.
it (and be sure to include the WAL files in your dump).
</para>
<para>
@@ -373,6 +376,70 @@ tar -cf backup.tar /usr/local/pgsql/data
the contents of indexes for example, just the commands to recreate
them.)
</para>
</sect1>
<sect1 id="backup-online">
<title>On-line backup and point-in-time recovery</title>
<para>
At all times, <productname>PostgreSQL</> maintains a <firstterm>write ahead
log</> (WAL) that shows details of every change made to the database's data
files. This log exists primarily for crash-safety purposes: if the system
crashes, the database can be restored to consistency by <quote>replaying</>
the log entries made since the last checkpoint. However, the existence
of the log makes it possible to use a third strategy for backing up
databases: we can combine a filesystem-level backup with backup of the WAL
files. If recovery is needed, we restore the backup and then replay from
the backed-up WAL files to bring the backup up to current time. This
approach is notably more complex to administer than either of the previous
approaches, but it has some significant benefits to offer:
<itemizedlist>
<listitem>
<para>
We do not need a perfectly consistent backup as the starting point.
Any internal inconsistency in the backup will be corrected by log
replay (this is not significantly different from what happens during
crash recovery). So we don't need filesystem snapshot capability,
just <application>tar</> or a similar archiving tool.
</para>
</listitem>
<listitem>
<para>
Since we can string together an indefinitely long sequence of WAL files
for replay, continuous backup can be had simply by continuing to archive
the WAL files. This is particularly valuable for large databases, where
making a full backup may take an unreasonable amount of time.
</para>
</listitem>
<listitem>
<para>
There is nothing that says we have to replay the WAL entries all the
way to the end. We could stop the replay at any point and have a
consistent snapshot of the database as it was at that time. Thus,
this technique supports <firstterm>point-in-time recovery</>: it is
possible to restore the database to its state at any time since your base
backup was taken.
</para>
</listitem>
<listitem>
<para>
If we continuously feed the series of WAL files to another machine
that's been loaded with the same base backup, we have a <quote>hot
standby</> system: at any point we can bring up the second machine
and it will have a nearly-current copy of the database.
</para>
</listitem>
</itemizedlist>
</para>
<para>
As with the plain filesystem-backup technique, this method can only
support restoration of an entire database cluster, not a subset.
Also, it requires a lot of archival storage: the base backup is bulky,
and a busy system will generate many megabytes of WAL traffic that
have to be archived. Still, it is the preferred backup technique in
many situations where high reliability is needed.
</para>
</sect1>
@@ -393,16 +460,16 @@ tar -cf backup.tar /usr/local/pgsql/data
change between major releases of <productname>PostgreSQL</> (where
the number after the first dot changes). This does not apply to
different minor releases under the same major release (where the
number of the second dot changes); these always have compatible
number after the second dot changes); these always have compatible
storage formats. For example, releases 7.0.1, 7.1.2, and 7.2 are
not compatible, whereas 7.1.1 and 7.1.2 are. When you update
between compatible versions, then you can simply reuse the data
area in disk by the new executables. Otherwise you need to
between compatible versions, you can simply replace the executables
and reuse the data area on disk. Otherwise you need to
<quote>back up</> your data and <quote>restore</> it on the new
server, using <application>pg_dump</>. (There are checks in place
that prevent you from doing the wrong thing, so no harm can be done
by confusing these things.) The precise installation procedure is
not subject of this section; these details are in <xref
not the subject of this section; those details are in <xref
linkend="installation">.
</para>
@@ -427,7 +494,7 @@ pg_dumpall -p 5432 | psql -d template1 -p 6543
<para>
If you cannot or do not want to run two servers in parallel you can
do the back up step before installing the new version, bring down
do the backup step before installing the new version, bring down
the server, move the old version out of the way, install the new
version, start the new server, restore the data. For example:
@@ -447,6 +514,14 @@ psql template1 < backup
you of strategic places to perform these steps.
</para>
<para>
You will always need a SQL dump (<application>pg_dump</> dump) for
migrating to a new release. Filesystem-level backups (including
on-line backups) will not work, for the same reason that you can't
just do the update in-place: the file formats won't necessarily be
compatible across major releases.
</para>
<note>
<para>
When you <quote>move the old installation out of the way</quote>


@@ -1,4 +1,4 @@
<!-- $PostgreSQL: pgsql/doc/src/sgml/ddl.sgml,v 1.26 2004/03/07 04:31:01 neilc Exp $ -->
<!-- $PostgreSQL: pgsql/doc/src/sgml/ddl.sgml,v 1.27 2004/08/03 20:32:30 tgl Exp $ -->
<chapter id="ddl">
<title>Data Definition</title>
@@ -1723,7 +1723,7 @@ SET search_path TO myschema;
</para>
<para>
See also <xref linkend="functions-misc"> for other ways to access
See also <xref linkend="functions-info"> for other ways to access
the schema search path.
</para>

File diff suppressed because it is too large


@@ -1,5 +1,5 @@
<!--
$PostgreSQL: pgsql/doc/src/sgml/ref/set.sgml,v 1.84 2003/11/29 19:51:39 pgsql Exp $
$PostgreSQL: pgsql/doc/src/sgml/ref/set.sgml,v 1.85 2004/08/03 20:32:32 tgl Exp $
PostgreSQL documentation
-->
@@ -229,7 +229,7 @@ SELECT setseed(<replaceable>value</replaceable>);
<para>
The function <function>set_config</function> provides equivalent
functionality. See <xref linkend="functions-misc">.
functionality. See <xref linkend="functions-admin">.
</para>
</refsect1>


@@ -1,5 +1,5 @@
<!--
$PostgreSQL: pgsql/doc/src/sgml/ref/show.sgml,v 1.35 2004/01/06 17:26:23 neilc Exp $
$PostgreSQL: pgsql/doc/src/sgml/ref/show.sgml,v 1.36 2004/08/03 20:32:32 tgl Exp $
PostgreSQL documentation
-->
@@ -130,7 +130,7 @@ SHOW ALL
<para>
The function <function>current_setting</function> produces
equivalent output. See <xref linkend="functions-misc">.
equivalent output. See <xref linkend="functions-admin">.
</para>
</refsect1>


@@ -7,7 +7,7 @@
* Portions Copyright (c) 1996-2003, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* $PostgreSQL: pgsql/src/backend/access/transam/xlog.c,v 1.153 2004/08/01 17:45:42 tgl Exp $
* $PostgreSQL: pgsql/src/backend/access/transam/xlog.c,v 1.154 2004/08/03 20:32:32 tgl Exp $
*
*-------------------------------------------------------------------------
*/
@@ -5048,3 +5048,233 @@ issue_xlog_fsync(void)
break;
}
}
/*
* pg_start_backup: set up for taking an on-line backup dump
*
* Essentially what this does is to create a backup label file in $PGDATA,
* where it will be archived as part of the backup dump. The label file
* contains the user-supplied label string (typically this would be used
* to tell where the backup dump will be stored) and the starting time and
* starting WAL offset for the dump.
*/
Datum
pg_start_backup(PG_FUNCTION_ARGS)
{
text *backupid = PG_GETARG_TEXT_P(0);
text *result;
char *backupidstr;
XLogRecPtr startpoint;
time_t stamp_time;
char strfbuf[128];
char labelfilename[MAXPGPATH];
char xlogfilename[MAXFNAMELEN];
uint32 _logId;
uint32 _logSeg;
struct stat stat_buf;
FILE *fp;
if (!superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
(errmsg("must be superuser to run a backup"))));
backupidstr = DatumGetCString(DirectFunctionCall1(textout,
PointerGetDatum(backupid)));
/*
* The oldest point in WAL that would be needed to restore starting from
* the most recent checkpoint is precisely the RedoRecPtr.
*/
startpoint = GetRedoRecPtr();
XLByteToSeg(startpoint, _logId, _logSeg);
XLogFileName(xlogfilename, ThisTimeLineID, _logId, _logSeg);
/*
* We deliberately use strftime/localtime not the src/timezone functions,
* so that backup labels will consistently be recorded in the same
* timezone regardless of TimeZone setting. This matches elog.c's
* practice.
*/
stamp_time = time(NULL);
strftime(strfbuf, sizeof(strfbuf),
"%Y-%m-%d %H:%M:%S %Z",
localtime(&stamp_time));
/*
* Check for existing backup label --- implies a backup is already running
*/
snprintf(labelfilename, MAXPGPATH, "%s/backup_label", DataDir);
if (stat(labelfilename, &stat_buf) != 0)
{
if (errno != ENOENT)
ereport(ERROR,
(errcode_for_file_access(),
errmsg("could not stat \"%s\": %m",
labelfilename)));
}
else
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("a backup is already in progress"),
errhint("If you're sure there is no backup in progress, remove file \"%s\" and try again.",
labelfilename)));
/*
* Okay, write the file
*/
fp = AllocateFile(labelfilename, "w");
if (!fp)
ereport(ERROR,
(errcode_for_file_access(),
errmsg("could not create file \"%s\": %m",
labelfilename)));
fprintf(fp, "START WAL LOCATION: %X/%X (file %s)\n",
startpoint.xlogid, startpoint.xrecoff, xlogfilename);
fprintf(fp, "START TIME: %s\n", strfbuf);
fprintf(fp, "LABEL: %s\n", backupidstr);
if (fflush(fp) || ferror(fp) || FreeFile(fp))
ereport(ERROR,
(errcode_for_file_access(),
errmsg("could not write file \"%s\": %m",
labelfilename)));
/*
* We're done. As a convenience, return the starting WAL offset.
*/
snprintf(xlogfilename, sizeof(xlogfilename), "%X/%X",
startpoint.xlogid, startpoint.xrecoff);
result = DatumGetTextP(DirectFunctionCall1(textin,
CStringGetDatum(xlogfilename)));
PG_RETURN_TEXT_P(result);
}
/*
* pg_stop_backup: finish taking an on-line backup dump
*
* We remove the backup label file created by pg_start_backup, and instead
* create a backup history file in pg_xlog (whence it will immediately be
* archived). The backup history file contains the same info found in
* the label file, plus the backup-end time and WAL offset.
*/
Datum
pg_stop_backup(PG_FUNCTION_ARGS)
{
text *result;
XLogCtlInsert *Insert = &XLogCtl->Insert;
XLogRecPtr startpoint;
XLogRecPtr stoppoint;
time_t stamp_time;
char strfbuf[128];
char labelfilename[MAXPGPATH];
char histfilename[MAXPGPATH];
char startxlogfilename[MAXFNAMELEN];
char stopxlogfilename[MAXFNAMELEN];
uint32 _logId;
uint32 _logSeg;
FILE *lfp;
FILE *fp;
char ch;
int ich;
if (!superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
(errmsg("must be superuser to run a backup"))));
/*
* Get the current end-of-WAL position; it will be unsafe to use this
* dump to restore to a point in advance of this time.
*/
LWLockAcquire(WALInsertLock, LW_EXCLUSIVE);
INSERT_RECPTR(stoppoint, Insert, Insert->curridx);
LWLockRelease(WALInsertLock);
XLByteToSeg(stoppoint, _logId, _logSeg);
XLogFileName(stopxlogfilename, ThisTimeLineID, _logId, _logSeg);
/*
* We deliberately use strftime/localtime not the src/timezone functions,
* so that backup labels will consistently be recorded in the same
* timezone regardless of TimeZone setting. This matches elog.c's
* practice.
*/
stamp_time = time(NULL);
strftime(strfbuf, sizeof(strfbuf),
"%Y-%m-%d %H:%M:%S %Z",
localtime(&stamp_time));
/*
* Open the existing label file
*/
snprintf(labelfilename, MAXPGPATH, "%s/backup_label", DataDir);
lfp = AllocateFile(labelfilename, "r");
if (!lfp)
{
if (errno != ENOENT)
ereport(ERROR,
(errcode_for_file_access(),
errmsg("could not read file \"%s\": %m",
labelfilename)));
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("a backup is not in progress")));
}
/*
* Read and parse the START WAL LOCATION line (this code is pretty
* crude, but we are not expecting any variability in the file format).
*/
if (fscanf(lfp, "START WAL LOCATION: %X/%X (file %24s)%c",
&startpoint.xlogid, &startpoint.xrecoff, startxlogfilename,
&ch) != 4 || ch != '\n')
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("invalid data in file \"%s\"", labelfilename)));
/*
* Write the backup history file
*/
XLByteToSeg(startpoint, _logId, _logSeg);
BackupHistoryFilePath(histfilename, ThisTimeLineID, _logId, _logSeg,
startpoint.xrecoff % XLogSegSize);
fp = AllocateFile(histfilename, "w");
if (!fp)
ereport(ERROR,
(errcode_for_file_access(),
errmsg("could not create file \"%s\": %m",
histfilename)));
fprintf(fp, "START WAL LOCATION: %X/%X (file %s)\n",
startpoint.xlogid, startpoint.xrecoff, startxlogfilename);
fprintf(fp, "STOP WAL LOCATION: %X/%X (file %s)\n",
stoppoint.xlogid, stoppoint.xrecoff, stopxlogfilename);
/* transfer start time and label lines from label to history file */
while ((ich = fgetc(lfp)) != EOF)
fputc(ich, fp);
fprintf(fp, "STOP TIME: %s\n", strfbuf);
if (fflush(fp) || ferror(fp) || FreeFile(fp))
ereport(ERROR,
(errcode_for_file_access(),
errmsg("could not write file \"%s\": %m",
histfilename)));
/*
* Close and remove the backup label file
*/
if (ferror(lfp) || FreeFile(lfp))
ereport(ERROR,
(errcode_for_file_access(),
errmsg("could not read file \"%s\": %m",
labelfilename)));
if (unlink(labelfilename) != 0)
ereport(ERROR,
(errcode_for_file_access(),
errmsg("could not remove file \"%s\": %m",
labelfilename)));
/*
* Notify archiver that history file may be archived immediately
*/
if (XLogArchivingActive())
{
BackupHistoryFileName(histfilename, ThisTimeLineID, _logId, _logSeg,
startpoint.xrecoff % XLogSegSize);
XLogArchiveNotify(histfilename);
}
/*
* We're done. As a convenience, return the ending WAL offset.
*/
snprintf(stopxlogfilename, sizeof(stopxlogfilename), "%X/%X",
stoppoint.xlogid, stoppoint.xrecoff);
result = DatumGetTextP(DirectFunctionCall1(textin,
CStringGetDatum(stopxlogfilename)));
PG_RETURN_TEXT_P(result);
}


@@ -19,7 +19,7 @@
*
*
* IDENTIFICATION
* $PostgreSQL: pgsql/src/backend/postmaster/pgarch.c,v 1.3 2004/08/01 17:45:43 tgl Exp $
* $PostgreSQL: pgsql/src/backend/postmaster/pgarch.c,v 1.4 2004/08/03 20:32:33 tgl Exp $
*
*-------------------------------------------------------------------------
*/
@@ -64,8 +64,8 @@
* ----------
*/
#define MIN_XFN_CHARS 16
#define MAX_XFN_CHARS 24
#define VALID_XFN_CHARS "0123456789ABCDEF.history"
#define MAX_XFN_CHARS 40
#define VALID_XFN_CHARS "0123456789ABCDEF.history.backup"
#define NUM_ARCHIVE_RETRIES 3


@@ -8,7 +8,7 @@
*
*
* IDENTIFICATION
* $PostgreSQL: pgsql/src/backend/utils/adt/misc.c,v 1.35 2004/07/02 18:59:22 joe Exp $
* $PostgreSQL: pgsql/src/backend/utils/adt/misc.c,v 1.36 2004/08/03 20:32:33 tgl Exp $
*
*-------------------------------------------------------------------------
*/
@@ -27,6 +27,8 @@
#include "catalog/pg_type.h"
#include "catalog/pg_tablespace.h"
#define atooid(x) ((Oid) strtoul((x), NULL, 10))
/*
* Check if data is Null
@@ -67,8 +69,7 @@ current_database(PG_FUNCTION_ARGS)
/*
* Functions to terminate a backend or cancel a query running on
* a different backend.
* Functions to send signals to other backends.
*/
static int pg_signal_backend(int pid, int sig)
@@ -76,14 +77,16 @@ static int pg_signal_backend(int pid, int sig)
if (!superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
(errmsg("only superuser can signal other backends"))));
(errmsg("must be superuser to signal other server processes"))));
if (!IsBackendPid(pid))
{
/* This is just a warning so a loop-through-resultset will not abort
* if one backend terminated on it's own during the run */
/*
* This is just a warning so a loop-through-resultset will not abort
 * if one backend terminated on its own during the run
*/
ereport(WARNING,
(errmsg("pid %i is not a postgresql backend",pid)));
(errmsg("PID %d is not a PostgreSQL server process", pid)));
return 0;
}
@@ -91,24 +94,32 @@ static int pg_signal_backend(int pid, int sig)
{
/* Again, just a warning to allow loops */
ereport(WARNING,
(errmsg("failed to send signal to backend %i: %m",pid)));
(errmsg("could not send signal to process %d: %m",pid)));
return 0;
}
return 1;
}
Datum
pg_terminate_backend(PG_FUNCTION_ARGS)
{
PG_RETURN_INT32(pg_signal_backend(PG_GETARG_INT32(0),SIGTERM));
}
Datum
pg_cancel_backend(PG_FUNCTION_ARGS)
{
PG_RETURN_INT32(pg_signal_backend(PG_GETARG_INT32(0),SIGINT));
}
#ifdef NOT_USED
/* Disabled in 8.0 due to reliability concerns; FIXME someday */
Datum
pg_terminate_backend(PG_FUNCTION_ARGS)
{
PG_RETURN_INT32(pg_signal_backend(PG_GETARG_INT32(0),SIGTERM));
}
#endif
/* Function to find out which databases make use of a tablespace */
typedef struct
{
@@ -140,9 +151,8 @@ Datum pg_tablespace_databases(PG_FUNCTION_ARGS)
if (tablespaceOid == GLOBALTABLESPACE_OID)
{
fctx->dirdesc = NULL;
ereport(NOTICE,
(errcode(ERRCODE_WARNING),
errmsg("global tablespace never has databases.")));
ereport(WARNING,
(errmsg("global tablespace never has databases")));
}
else
{
@@ -154,10 +164,17 @@ Datum pg_tablespace_databases(PG_FUNCTION_ARGS)
fctx->dirdesc = AllocateDir(fctx->location);
if (!fctx->dirdesc) /* not a tablespace */
ereport(NOTICE,
(errcode(ERRCODE_WARNING),
errmsg("%d is no tablespace oid.", tablespaceOid)));
if (!fctx->dirdesc)
{
/* the only expected error is ENOENT */
if (errno != ENOENT)
ereport(ERROR,
(errcode_for_file_access(),
errmsg("could not open directory \"%s\": %m",
fctx->location)));
ereport(WARNING,
(errmsg("%u is not a tablespace oid", tablespaceOid)));
}
}
funcctx->user_fctx = fctx;
MemoryContextSwitchTo(oldcontext);
@@ -174,27 +191,30 @@ Datum pg_tablespace_databases(PG_FUNCTION_ARGS)
char *subdir;
DIR *dirdesc;
Oid datOid = atol(de->d_name);
Oid datOid = atooid(de->d_name);
/* this test skips . and .., but is awfully weak */
if (!datOid)
continue;
/* if database subdir is empty, don't report tablespace as used */
/* size = path length + dir sep char + file name + terminator */
subdir = palloc(strlen(fctx->location) + 1 + strlen(de->d_name) + 1);
sprintf(subdir, "%s/%s", fctx->location, de->d_name);
dirdesc = AllocateDir(subdir);
if (dirdesc)
{
while ((de = readdir(dirdesc)) != 0)
{
if (strcmp(de->d_name, ".") && strcmp(de->d_name, ".."))
break;
}
pfree(subdir);
FreeDir(dirdesc);
pfree(subdir);
if (!dirdesc)
continue; /* XXX more sloppiness */
if (!de) /* database subdir is empty; don't report tablespace as used */
continue;
while ((de = readdir(dirdesc)) != 0)
{
if (strcmp(de->d_name, ".") != 0 && strcmp(de->d_name, "..") != 0)
break;
}
FreeDir(dirdesc);
if (!de)
continue; /* indeed, nothing in it */
SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(datOid));
}


@@ -11,12 +11,13 @@
* Portions Copyright (c) 1996-2003, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* $PostgreSQL: pgsql/src/include/access/xlog_internal.h,v 1.1 2004/07/21 22:31:25 tgl Exp $
* $PostgreSQL: pgsql/src/include/access/xlog_internal.h,v 1.2 2004/08/03 20:32:34 tgl Exp $
*/
#ifndef XLOG_INTERNAL_H
#define XLOG_INTERNAL_H
#include "access/xlog.h"
#include "fmgr.h"
#include "storage/block.h"
#include "storage/relfilenode.h"
@@ -177,7 +178,7 @@ typedef XLogLongPageHeaderData *XLogLongPageHeader;
* These macros encapsulate knowledge about the exact layout of XLog file
* names, timeline history file names, and archive-status file names.
*/
#define MAXFNAMELEN 32
#define MAXFNAMELEN 64
#define XLogFileName(fname, tli, log, seg) \
snprintf(fname, MAXFNAMELEN, "%08X%08X%08X", tli, log, seg)
@@ -194,6 +195,12 @@ typedef XLogLongPageHeaderData *XLogLongPageHeader;
#define StatusFilePath(path, xlog, suffix) \
snprintf(path, MAXPGPATH, "%s/archive_status/%s%s", XLogDir, xlog, suffix)
#define BackupHistoryFileName(fname, tli, log, seg, offset) \
snprintf(fname, MAXFNAMELEN, "%08X%08X%08X.%08X.backup", tli, log, seg, offset)
#define BackupHistoryFilePath(path, tli, log, seg, offset) \
snprintf(path, MAXPGPATH, "%s/%08X%08X%08X.%08X.backup", XLogDir, tli, log, seg, offset)
extern char XLogDir[MAXPGPATH];
/*
@@ -221,4 +228,10 @@ typedef struct RmgrData
extern const RmgrData RmgrTable[];
/*
* These aren't in xlog.h because I'd rather not include fmgr.h there.
*/
extern Datum pg_start_backup(PG_FUNCTION_ARGS);
extern Datum pg_stop_backup(PG_FUNCTION_ARGS);
#endif /* XLOG_INTERNAL_H */


@@ -37,7 +37,7 @@
* Portions Copyright (c) 1996-2003, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* $PostgreSQL: pgsql/src/include/catalog/catversion.h,v 1.247 2004/07/21 20:43:53 momjian Exp $
* $PostgreSQL: pgsql/src/include/catalog/catversion.h,v 1.248 2004/08/03 20:32:35 tgl Exp $
*
*-------------------------------------------------------------------------
*/
@@ -53,6 +53,6 @@
*/
/* yyyymmddN */
#define CATALOG_VERSION_NO 200407211
#define CATALOG_VERSION_NO 200408031
#endif


@@ -7,7 +7,7 @@
* Portions Copyright (c) 1996-2003, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* $PostgreSQL: pgsql/src/include/catalog/pg_proc.h,v 1.342 2004/07/12 20:23:53 momjian Exp $
* $PostgreSQL: pgsql/src/include/catalog/pg_proc.h,v 1.343 2004/08/03 20:32:35 tgl Exp $
*
* NOTES
* The script catalog/genbki.sh reads this file and generates .bki
@@ -2815,11 +2815,6 @@ DESCR("Statistics: Blocks fetched for database");
DATA(insert OID = 1945 ( pg_stat_get_db_blocks_hit PGNSP PGUID 12 f f t f s 1 20 "26" _null_ pg_stat_get_db_blocks_hit - _null_ ));
DESCR("Statistics: Blocks found in cache for database");
DATA(insert OID = 2171 ( pg_terminate_backend PGNSP PGUID 12 f f t f s 1 23 "23" _null_ pg_terminate_backend - _null_ ));
DESCR("Terminate a backend process");
DATA(insert OID = 2172 ( pg_cancel_backend PGNSP PGUID 12 f f t f s 1 23 "23" _null_ pg_cancel_backend - _null_ ));
DESCR("Cancel running query on a backend process");
DATA(insert OID = 1946 ( encode PGNSP PGUID 12 f f t f i 2 25 "17 25" _null_ binary_encode - _null_ ));
DESCR("Convert bytea value into some ascii-only text string");
DATA(insert OID = 1947 ( decode PGNSP PGUID 12 f f t f i 2 17 "25 25" _null_ binary_decode - _null_ ));
@@ -2993,10 +2988,18 @@ DATA(insert OID = 2082 ( pg_operator_is_visible PGNSP PGUID 12 f f t f s 1 16 "
DESCR("is operator visible in search path?");
DATA(insert OID = 2083 ( pg_opclass_is_visible PGNSP PGUID 12 f f t f s 1 16 "26" _null_ pg_opclass_is_visible - _null_ ));
DESCR("is opclass visible in search path?");
DATA(insert OID = 2093 ( pg_conversion_is_visible PGNSP PGUID 12 f f t f s 1 16 "26" _null_ pg_conversion_is_visible - _null_ ));
DATA(insert OID = 2093 ( pg_conversion_is_visible PGNSP PGUID 12 f f t f s 1 16 "26" _null_ pg_conversion_is_visible - _null_ ));
DESCR("is conversion visible in search path?");
DATA(insert OID = 2171 ( pg_cancel_backend PGNSP PGUID 12 f f t f v 1 23 "23" _null_ pg_cancel_backend - _null_ ));
DESCR("Cancel a server process' current query");
DATA(insert OID = 2172 ( pg_start_backup PGNSP PGUID 12 f f t f v 1 25 "25" _null_ pg_start_backup - _null_ ));
DESCR("Prepare for taking an online backup");
DATA(insert OID = 2173 ( pg_stop_backup PGNSP PGUID 12 f f t f v 0 25 "" _null_ pg_stop_backup - _null_ ));
DESCR("Finish taking an online backup");
/* Aggregates (moved here from pg_aggregate for 7.3) */
DATA(insert OID = 2100 ( avg PGNSP PGUID 12 t f f f i 1 1700 "20" _null_ aggregate_dummy - _null_ ));


@@ -7,7 +7,7 @@
* Portions Copyright (c) 1996-2003, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* $PostgreSQL: pgsql/src/include/utils/builtins.h,v 1.246 2004/07/12 20:23:59 momjian Exp $
* $PostgreSQL: pgsql/src/include/utils/builtins.h,v 1.247 2004/08/03 20:32:36 tgl Exp $
*
*-------------------------------------------------------------------------
*/
@@ -360,7 +360,6 @@ extern Datum float84ge(PG_FUNCTION_ARGS);
extern Datum nullvalue(PG_FUNCTION_ARGS);
extern Datum nonnullvalue(PG_FUNCTION_ARGS);
extern Datum current_database(PG_FUNCTION_ARGS);
extern Datum pg_terminate_backend(PG_FUNCTION_ARGS);
extern Datum pg_cancel_backend(PG_FUNCTION_ARGS);
extern Datum pg_tablespace_databases(PG_FUNCTION_ARGS);