$Header: /cvsroot/pgsql/contrib/pg_dumplo/Attic/README.pg_dumplo,v 1.2 2000/11/22 00:00:55 tgl Exp $

pg_dumplo - PostgreSQL large object dumper
==========================================        

By Karel Zak <zakkr@zf.jcu.cz>


Compilation:
===========

	* run the top-level ./configure in the PG source directory
	* gmake all
	* gmake install
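
For example, building from an unpacked source tree might look like this
(the directory name pgsql is just an assumption about where the sources
were unpacked):

SnowWhite:$ cd pgsql
SnowWhite:$ ./configure
SnowWhite:$ cd contrib/pg_dumplo
SnowWhite:$ gmake all
SnowWhite:$ gmake install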
	
THANKS:
====== 

	<??? I lost his e-mail ???>
		* option '--all' and pg_class usage

	Pavel Janík ml. <Pavel.Janik@linux.cz>
		* HOWTO (the rest of this file)


How to use pg_dumplo?
=====================

(c) 2000, Pavel Janík ml. <Pavel.Janik@linux.cz>


Q: How do you use pg_dumplo?
============================

A: Here is a small demo of backing up a database table that references
Large Objects:


We will create a demo database and a small and useless table `lo' inside
it:

SnowWhite:$ createdb test
CREATE DATABASE

OK, our database named 'test' is created. Now we create a demo table
with only one column, named 'id', which will hold the OID of a Large
Object:

SnowWhite:$ psql test
Welcome to psql, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help on internal slash commands
       \g or terminate with semicolon to execute query
       \q to quit

test=# CREATE TABLE lo (id oid);
CREATE
test=# \lo_import /etc/aliases
lo_import 19338
test=# INSERT INTO lo VALUES (19338);
INSERT 19352 1
test=# select * from lo;
  id   
-------
 19338
(1 row)

test=# \q

In the above example you can see that we have also imported one "Large
Object" - the file /etc/aliases. It was assigned the OID 19338, so we
inserted that OID into the column id of the table lo. The final SELECT
shows that we have one record in the table.
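
As a side note, the import and the INSERT can be combined into a single
step with the server-side lo_import() function (note that this requires
superuser rights, and the file must be readable by the server process,
not by the client):

test=# INSERT INTO lo VALUES (lo_import('/etc/aliases'));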

Now we can demonstrate pg_dumplo at work. First we create a dump
directory that will hold the complete dump of the large objects
(/tmp/dump):

mkdir -p /tmp/dump

Now we can dump all large objects from the database `test' whose OIDs
are stored in the column `id' of the table `lo':

SnowWhite:$ pg_dumplo -s /tmp/dump -d test -l lo.id
pg_dumplo: dump lo.id (1 large obj)

Voila, we have the dump of all Large Objects in our directory:

SnowWhite:$ tree /tmp/dump/
/tmp/dump/
`-- test
    |-- lo
    |   `-- id
    |       `-- 19338
    `-- lo_dump.index

3 directories, 2 files
SnowWhite:$ 

In practice, we'd probably use

SnowWhite:$ pg_dumplo -s /tmp/dump -d test -e

to export all large objects that are referenced by any OID-type column
in the database.  Calling out specific column(s) with -l is only needed
for a selective dump.
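
If several columns referenced large objects, they could all be listed
after -l. For instance (docs.file here is a hypothetical second
table/column, used only for illustration):

SnowWhite:$ pg_dumplo -s /tmp/dump -d test -l lo.id docs.file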

For routine backup purposes, the dump directory could now be converted into
an archive file with tar and stored on tape.  Notice that a single dump
directory can hold the dump of multiple databases.
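
For example, the archive could be created like this (a sketch, assuming
a GNU-style tar; /backup is a hypothetical destination):

SnowWhite:$ tar cf /backup/test_lo_dump.tar -C /tmp/dump .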

Now, how can we recreate the contents of the table lo and the Large
Object data if something goes wrong?  To do this, we assume that
pg_dump is also used to store the definitions and contents of the
regular tables in the database.

SnowWhite:$ pg_dump test >test.backup

Now, if we lose the database:

SnowWhite:$ dropdb test
DROP DATABASE

we can recreate it and reload the regular tables from the dump file:

SnowWhite:$ createdb test
CREATE DATABASE

SnowWhite:$ psql test <test.backup

But at this point our database has no large objects in it.  What's more,
the large-object-referencing columns contain the OIDs of the old large
objects, which will not be the OIDs they'll have when reloaded.  Never
fear: pg_dumplo will fix the large object references at the same time
it reloads the large objects.  We reload the LO data from the dump
directory like this:

SnowWhite:$ pg_dumplo -s /tmp/dump -d test -i
19338	lo	id	test/lo/id/19338
SnowWhite:$

And that's everything.  The contents of the table lo are automatically
updated to refer to the new large object OIDs.
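
To convince yourself, select from the table again; the id column now
holds whatever OID was assigned to the reloaded object (normally it
will differ from the original 19338):

SnowWhite:$ psql -c "SELECT * FROM lo" test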

Summary: This small example shows that pg_dumplo makes it easy to dump
a database's Large Objects completely.

For more information, see the built-in help (pg_dumplo -h).