Remove contrib modules that have been migrated to pgfoundry: adddepend,
dbase, dbmirror, fulltextindex, mac, userlock; or abandoned: mSQL-interface,
tips.
Tom Lane 2006-09-05 17:20:29 +00:00
parent a3242fb42c
commit af7d257e21
44 changed files with 6 additions and 7063 deletions

View File

@ -1,4 +1,4 @@
-# $PostgreSQL: pgsql/contrib/Makefile,v 1.67 2006/09/04 15:07:46 petere Exp $
+# $PostgreSQL: pgsql/contrib/Makefile,v 1.68 2006/09/05 17:20:26 tgl Exp $
subdir = contrib
top_builddir = ..
@ -9,11 +9,8 @@ WANTED_DIRS = \
btree_gist \
chkpass \
cube \
dbase \
dblink \
dbmirror \
earthdistance \
fulltextindex \
fuzzystrmatch \
intagg \
intarray \
@ -31,9 +28,7 @@ WANTED_DIRS = \
seg \
spi \
tablefunc \
tips \
tsearch2 \
userlock \
vacuumlo
ifeq ($(with_openssl),yes)
@ -41,9 +36,6 @@ WANTED_DIRS += sslinfo
endif
# Missing:
# adddepend \ (does not have a makefile)
# mSQL-interface \ (requires msql installed)
# mac \ (does not have a makefile)
# start-scripts \ (does not have a makefile)
# xml2 \ (requires libxml installed)

View File

@ -24,13 +24,9 @@ procedure.
Index:
------
adddepend -
Add object dependency information to pre-7.3 objects.
by Rod Taylor <rbt@rbt.ca>
adminpack -
File and log manipulation routines, used by pgAdmin
-by From: Dave Page <dpage@vale-housing.co.uk>
+by Dave Page <dpage@vale-housing.co.uk>
btree_gist -
Support for emulating BTREE indexing in GiST
@ -44,28 +40,14 @@ cube -
Multidimensional-cube datatype (GiST indexing example)
by Gene Selkov, Jr. <selkovjr@mcs.anl.gov>
dbase -
Converts from dbase/xbase to PostgreSQL
by Maarten.Boekhold <Maarten.Boekhold@reuters.com>,
Frank Koormann <fkoorman@usf.uni-osnabrueck.de>,
Ivan Baldo <lubaldo@adinet.com.uy>
dblink -
Allows remote query execution
by Joe Conway <mail@joeconway.com>
dbmirror -
Replication server
by Steven Singer <ssinger@navtechinc.com>
earthdistance -
Operator for computing earth distance for two points
by Hal Snyder <hal@vailsys.com>
fulltextindex -
Full text indexing using triggers
by Maarten Boekhold <maartenb@dutepp0.et.tudelft.nl>
fuzzystrmatch -
Levenshtein, metaphone, and soundex fuzzy string matching
by Joe Conway <mail@joeconway.com>, Joel Burton <jburton@scw.org>
@ -90,14 +72,6 @@ ltree -
Tree-like data structures
by Teodor Sigaev <teodor@sigaev.ru> and Oleg Bartunov <oleg@sai.msu.su>
mSQL-interface -
mSQL API translation library
by Aldrin Leal <aldrin@americasnet.com>
mac -
Support functions for MAC address types
by Lawrence E. Rosenman <ler@lerctr.org>
oid2name -
Maps numeric files to table names
by B Palmer <bpalmer@crimelabs.net>
@ -139,6 +113,10 @@ seg -
spi -
Various trigger functions, examples for using SPI.
sslinfo -
Functions to get information about SSL certificates
by Victor Wagner <vitus@cryptocom.ru>
start-scripts -
Scripts for starting the server at boot time.
@ -146,19 +124,11 @@ tablefunc -
Examples of functions returning tables
by Joe Conway <mail@joeconway.com>
tips -
Getting Apache to log to PostgreSQL
by Terry Mackintosh <terry@terrym.com>
tsearch2 -
Full-text-index support using GiST
by Teodor Sigaev <teodor@sigaev.ru> and Oleg Bartunov
<oleg@sai.msu.su>.
userlock -
User locks
by Massimo Dal Zotto <dz@cs.unitn.it>
vacuumlo -
Remove orphaned large objects
by Peter T Mount <peter@retep.org.uk>

View File

@ -1,45 +0,0 @@
Dependency Additions For PostgreSQL 7.3 Upgrades
In PostgreSQL releases prior to 7.3, certain database objects didn't
have proper dependencies. For example:
1) When you created a table with a SERIAL column, there was no linkage
to its underlying sequence. If you dropped the table with the SERIAL
column, the sequence was not automatically dropped.
2) When you created a foreign key, it created three triggers. If you
wanted to drop the foreign key, you had to drop the three triggers
individually.
3) When you created a column with constraint UNIQUE, a unique index was
created but there was no indication that the index was created as a
UNIQUE column constraint.
Fortunately, PostgreSQL 7.3 and later track such dependencies
and handle these cases. Unfortunately, PostgreSQL dumps from prior
releases don't contain such dependency information.
This script operates on >= 7.3 databases and adds dependency information
for the objects listed above. It prompts the user on whether to create
a linkage for each object. You can use the -Y option to prevent such
prompting and have it generate all possible linkages.
This program requires the DBD::Pg Perl interface.
Usage:
adddepend [options] [dbname [username]]
Options:
-d <dbname> Specify database name to connect to (default: postgres)
-h <host> Specify database server host (default: localhost)
-p <port> Specify database server port (default: 5432)
-u <username> Specify database username (default: postgres)
--password=<pw> Specify database password (default: blank)
-Y The script normally asks whether the user wishes to apply
the conversion for each item found. This forces YES to all
questions.
Rod Taylor <pg@rbt.ca>

View File

@ -1,624 +0,0 @@
#!/usr/bin/perl
# $PostgreSQL: pgsql/contrib/adddepend/adddepend,v 1.6 2003/11/29 22:39:16 pgsql Exp $
# Project exists to assist PostgreSQL users with their structural upgrade
# from PostgreSQL 7.2 (or prior) to 7.3 or 7.4. Must be run against a 7.3 or 7.4
# database system (dump, upgrade daemon, restore, run this script)
#
# - Replace old style Foreign Keys with new style
# - Replace old SERIAL columns with new ones
# - Replace old style Unique Indexes with new style Unique Constraints
# License
# -------
# Copyright (c) 2001, Rod Taylor
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided
# with the distribution.
#
# 3. Neither the name of the InQuent Technologies Inc. nor the names
# of its contributors may be used to endorse or promote products
# derived from this software without specific prior written
# permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FREEBSD
# PROJECT OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
use DBI;
use strict;
# Fetch the connection information from the local environment
my $dbuser = $ENV{'PGUSER'};
$dbuser ||= $ENV{'USER'};
my $database = $ENV{'PGDATABASE'};
$database ||= $dbuser;
my $dbisset = 0;
my $dbhost = $ENV{'PGHOST'};
$dbhost ||= "";
my $dbport = $ENV{'PGPORT'};
$dbport ||= "";
my $dbpass = "";
# Yes to all?
my $yes = 0;
# What's the name of the binary?
my $basename = $0;
$basename =~ s|.*/([^/]+)$|$1|;
## Process user supplied arguments.
for( my $i=0; $i <= $#ARGV; $i++ ) {
ARGPARSE: for ( $ARGV[$i] ) {
/^-d$/ && do { $database = $ARGV[++$i];
$dbisset = 1;
last;
};
/^-[uU]$/ && do { $dbuser = $ARGV[++$i];
if (! $dbisset) {
$database = $dbuser;
}
last;
};
/^-h$/ && do { $dbhost = $ARGV[++$i]; last; };
/^-p$/ && do { $dbport = $ARGV[++$i]; last; };
/^--password=/ && do { $dbpass = $ARGV[$i];
$dbpass =~ s/^--password=//g;
last;
};
/^-Y$/ && do { $yes = 1; last; };
/^-\?$/ && do { usage(); last; };
/^--help$/ && do { usage(); last; };
}
}
# If no arguments were set, then tell them about usage
if ($#ARGV <= 0) {
print <<MSG
No arguments set. Use '$basename --help' for help
Connecting to database '$database' as user '$dbuser'
MSG
;
}
my $dsn = "dbi:Pg:dbname=$database";
$dsn .= ";host=$dbhost" if ( "$dbhost" ne "" );
$dsn .= ";port=$dbport" if ( "$dbport" ne "" );
# Database Connection
# -------------------
my $dbh = DBI->connect($dsn, $dbuser, $dbpass);
# We want to control commits
$dbh->{'AutoCommit'} = 0;
# PostgreSQL's version is used to determine what queries are required
# to retrieve a given information set.
my $sql_GetVersion = qq{
SELECT cast(substr(version(), 12, 1) as integer) * 10000
+ cast(substr(version(), 14, 1) as integer) * 100
as version;
};
my $sth_GetVersion = $dbh->prepare($sql_GetVersion);
$sth_GetVersion->execute();
my $version = $sth_GetVersion->fetchrow_hashref;
my $pgversion = $version->{'version'};
# control where things get created
my $sql = qq{
SET search_path = public;
};
my $sth = $dbh->prepare($sql);
$sth->execute();
END {
$dbh->disconnect() if $dbh;
}
findUniqueConstraints();
findSerials();
findForeignKeys();
# Find old style Foreign Keys based on:
#
# - Group of 3 triggers of the appropriate types
# -
sub findForeignKeys
{
my $sql = qq{
SELECT tgargs
, tgnargs
FROM pg_trigger
WHERE NOT EXISTS (SELECT *
FROM pg_depend
JOIN pg_constraint as c ON (refobjid = c.oid)
WHERE objid = pg_trigger.oid
AND deptype = 'i'
AND contype = 'f'
)
GROUP BY tgargs
, tgnargs
HAVING count(*) = 3;
};
my $sth = $dbh->prepare($sql);
$sth->execute() || triggerError($!);
while (my $row = $sth->fetchrow_hashref)
{
# Fetch vars
my $fkeynargs = $row->{'tgnargs'};
my $fkeyargs = $row->{'tgargs'};
my $matchtype = "MATCH SIMPLE";
my $updatetype = "";
my $deletetype = "";
if ($fkeynargs % 2 == 0 && $fkeynargs >= 6) {
my ( $keyname
, $table
, $ftable
, $unspecified
, $lcolumn_name
, $fcolumn_name
, @junk
) = split(/\000/, $fkeyargs);
# Account for old versions which don't seem to handle NULL
# but instead return a string.  Newer DBD::Pg drivers
# don't have this problem
if (!defined($ftable)) {
( $keyname
, $table
, $ftable
, $unspecified
, $lcolumn_name
, $fcolumn_name
, @junk
) = split(/\\000/, $fkeyargs);
}
else
{
# Clean up the string for further manipulation. DBD doesn't deal well with
# strings with NULLs in them
$fkeyargs =~ s|\000|\\000|g;
}
# Catch and record MATCH FULL
if ($unspecified eq "FULL")
{
$matchtype = "MATCH FULL";
}
# Start off our column lists
my $key_cols = "\"$lcolumn_name\"";
my $ref_cols = "\"$fcolumn_name\"";
# Perhaps there is more than a single column
while ($lcolumn_name = shift(@junk) and $fcolumn_name = shift(@junk)) {
$key_cols .= ", \"$lcolumn_name\"";
$ref_cols .= ", \"$fcolumn_name\"";
}
my $trigsql = qq{
SELECT tgname
, relname
, proname
FROM pg_trigger
JOIN pg_proc ON (pg_proc.oid = tgfoid)
JOIN pg_class ON (pg_class.oid = tgrelid)
WHERE tgargs = ?;
};
my $tgsth = $dbh->prepare($trigsql);
$tgsth->execute($fkeyargs) || triggerError($!);
my $triglist = "";
while (my $tgrow = $tgsth->fetchrow_hashref)
{
my $trigname = $tgrow->{'tgname'};
my $tablename = $tgrow->{'relname'};
my $fname = $tgrow->{'proname'};
for ($fname)
{
/^RI_FKey_cascade_del$/ && do {$deletetype = "ON DELETE CASCADE"; last;};
/^RI_FKey_cascade_upd$/ && do {$updatetype = "ON UPDATE CASCADE"; last;};
/^RI_FKey_restrict_del$/ && do {$deletetype = "ON DELETE RESTRICT"; last;};
/^RI_FKey_restrict_upd$/ && do {$updatetype = "ON UPDATE RESTRICT"; last;};
/^RI_FKey_setnull_del$/ && do {$deletetype = "ON DELETE SET NULL"; last;};
/^RI_FKey_setnull_upd$/ && do {$updatetype = "ON UPDATE SET NULL"; last;};
/^RI_FKey_setdefault_del$/ && do {$deletetype = "ON DELETE SET DEFAULT"; last;};
/^RI_FKey_setdefault_upd$/ && do {$updatetype = "ON UPDATE SET DEFAULT"; last;};
/^RI_FKey_noaction_del$/ && do {$deletetype = "ON DELETE NO ACTION"; last;};
/^RI_FKey_noaction_upd$/ && do {$updatetype = "ON UPDATE NO ACTION"; last;};
}
$triglist .= " DROP TRIGGER \"$trigname\" ON \"$tablename\";\n";
}
my $constraint = "";
if ($keyname ne "<unnamed>")
{
$constraint = "CONSTRAINT \"$keyname\"";
}
my $fkey = qq{
$triglist
ALTER TABLE \"$table\" ADD $constraint FOREIGN KEY ($key_cols)
REFERENCES \"$ftable\"($ref_cols) $matchtype $updatetype $deletetype;
};
# Does the user want to upgrade this foreign key?
print <<MSG
The below commands will upgrade the foreign key style. Shall I execute them?
$fkey
MSG
;
if (userConfirm())
{
my $sthfkey = $dbh->prepare($fkey);
$sthfkey->execute() || $dbh->rollback();
$dbh->commit() || $dbh->rollback();
}
}
}
}
# Find old style Unique Constraints based on:
#
# - Unique indexes without a corresponding entry
#   in pg_constraint
sub findUniqueConstraints
{
my $sql;
if ( $pgversion >= 70400 ) {
$sql = qq{
SELECT pg_index.*, quote_ident(ci.relname) AS index_name
, quote_ident(ct.relname) AS table_name
, pg_catalog.pg_get_indexdef(indexrelid) AS constraint_definition
, indclass
FROM pg_catalog.pg_class AS ci
JOIN pg_catalog.pg_index ON (ci.oid = indexrelid)
JOIN pg_catalog.pg_class AS ct ON (ct.oid = indrelid)
JOIN pg_catalog.pg_namespace ON (ct.relnamespace = pg_namespace.oid)
WHERE indisunique -- Unique indexes only
AND indpred IS NULL -- No Partial Indexes
AND indexprs IS NULL -- No expressional indexes
AND NOT EXISTS (SELECT TRUE
FROM pg_catalog.pg_depend
JOIN pg_catalog.pg_constraint
ON (refobjid = pg_constraint.oid)
WHERE objid = indexrelid
AND objsubid = 0)
AND nspname NOT IN ('pg_catalog', 'pg_toast');
};
}
else
{
$sql = qq{
SELECT pg_index.*, quote_ident(ci.relname) AS index_name
, quote_ident(ct.relname) AS table_name
, pg_catalog.pg_get_indexdef(indexrelid) AS constraint_definition
, indclass
FROM pg_catalog.pg_class AS ci
JOIN pg_catalog.pg_index ON (ci.oid = indexrelid)
JOIN pg_catalog.pg_class AS ct ON (ct.oid = indrelid)
JOIN pg_catalog.pg_namespace ON (ct.relnamespace = pg_namespace.oid)
WHERE indisunique -- Unique indexes only
AND indpred = '' -- No Partial Indexes
AND indproc = 0 -- No expressional indexes
AND NOT EXISTS (SELECT TRUE
FROM pg_catalog.pg_depend
JOIN pg_catalog.pg_constraint
ON (refobjid = pg_constraint.oid)
WHERE objid = indexrelid
AND objsubid = 0)
AND nspname NOT IN ('pg_catalog', 'pg_toast');
};
}
my $opclass_sql = qq{
SELECT TRUE
FROM pg_catalog.pg_opclass
JOIN pg_catalog.pg_am ON (opcamid = pg_am.oid)
WHERE amname = 'btree'
AND pg_opclass.oid = ?
AND pg_opclass.oid < 15000;
};
my $sth = $dbh->prepare($sql) || triggerError($!);
my $opclass_sth = $dbh->prepare($opclass_sql) || triggerError($!);
$sth->execute();
ITERATION:
while (my $row = $sth->fetchrow_hashref)
{
# Fetch vars
my $constraint_name = $row->{'index_name'};
my $table = $row->{'table_name'};
my $columns = $row->{'constraint_definition'};
# Test the opclass is BTree and was not added after installation
my @classes = split(/ /, $row->{'indclass'});
while (my $class = pop(@classes))
{
$opclass_sth->execute($class);
next ITERATION if ($opclass_sth->rows == 0);
}
# Extract the columns from the index definition
$columns =~ s|.*\(([^\)]+)\).*|$1|g;
$columns =~ s|([^\s]+)[^\s]+_ops|$1|g;
my $upsql = qq{
DROP INDEX $constraint_name RESTRICT;
ALTER TABLE $table ADD CONSTRAINT $constraint_name UNIQUE ($columns);
};
# Does the user want to upgrade this constraint?
print <<MSG
Upgrade the Unique Constraint style via:
$upsql
MSG
;
if (userConfirm())
{
# Drop the old index and create a new constraint by the same name
# to replace it.
my $upsth = $dbh->prepare($upsql);
$upsth->execute() || $dbh->rollback();
$dbh->commit() || $dbh->rollback();
}
}
}
# Find possible old style Serial columns based on:
#
# - Column is int or bigint
# - Column has a nextval() default
# - The sequence name includes the tablename, column name, and ends in _seq
# or includes the tablename and is 40 or more characters in length.
sub findSerials
{
my $sql = qq{
SELECT nspname AS nspname
, relname AS relname
, attname AS attname
, adsrc
FROM pg_catalog.pg_class as c
JOIN pg_catalog.pg_attribute as a
ON (c.oid = a.attrelid)
JOIN pg_catalog.pg_attrdef as ad
ON (a.attrelid = ad.adrelid
AND a.attnum = ad.adnum)
JOIN pg_catalog.pg_type as t
ON (t.typname IN ('int4', 'int8')
AND t.oid = a.atttypid)
JOIN pg_catalog.pg_namespace as n
ON (c.relnamespace = n.oid)
WHERE n.nspname = 'public'
AND adsrc LIKE 'nextval%'
AND adsrc LIKE '%'|| relname ||'_'|| attname ||'_seq%'
AND NOT EXISTS (SELECT *
FROM pg_catalog.pg_depend as sd
JOIN pg_catalog.pg_class as sc
ON (sc.oid = sd.objid)
WHERE sd.refobjid = a.attrelid
AND sd.refobjsubid = a.attnum
AND sd.objsubid = 0
AND deptype = 'i'
AND sc.relkind = 'S'
AND sc.relname = c.relname ||'_'|| a.attname || '_seq'
);
};
my $sth = $dbh->prepare($sql) || triggerError($!);
$sth->execute();
while (my $row = $sth->fetchrow_hashref)
{
# Fetch vars
my $table = $row->{'relname'};
my $column = $row->{'attname'};
my $seq = $row->{'adsrc'};
# Extract the sequence name from the default
$seq =~ s|^nextval\(["']+([^'"\)]+)["']+.*\)$|$1|g;
# Does the user want to upgrade this sequence?
print <<MSG
Do you wish to upgrade Sequence '$seq' to SERIAL?
Found on column $table.$column
MSG
;
if (userConfirm())
{
# Add the pg_depend entry for the serial column. Should be enough
# to fool pg_dump into recreating it properly next time. The default
# is still slightly different than a fresh serial, but close enough.
my $upsql = qq{
INSERT INTO pg_catalog.pg_depend
( classid
, objid
, objsubid
, refclassid
, refobjid
, refobjsubid
, deptype
) VALUES ( (SELECT c.oid -- classid
FROM pg_class as c
JOIN pg_namespace as n
ON (n.oid = c.relnamespace)
WHERE n.nspname = 'pg_catalog'
AND c.relname = 'pg_class')
, (SELECT c.oid -- objid
FROM pg_class as c
JOIN pg_namespace as n
ON (n.oid = c.relnamespace)
WHERE n.nspname = 'public'
AND c.relname = '$seq')
, 0 -- objsubid
, (SELECT c.oid -- refclassid
FROM pg_class as c
JOIN pg_namespace as n
ON (n.oid = c.relnamespace)
WHERE n.nspname = 'pg_catalog'
AND c.relname = 'pg_class')
, (SELECT c.oid -- refobjid
FROM pg_class as c
JOIN pg_namespace as n
ON (n.oid = c.relnamespace)
WHERE n.nspname = 'public'
AND c.relname = '$table')
, (SELECT a.attnum -- refobjsubid
FROM pg_class as c
JOIN pg_namespace as n
ON (n.oid = c.relnamespace)
JOIN pg_attribute as a
ON (a.attrelid = c.oid)
WHERE n.nspname = 'public'
AND c.relname = '$table'
AND a.attname = '$column')
, 'i' -- deptype
);
};
my $upsth = $dbh->prepare($upsql);
$upsth->execute() || $dbh->rollback();
$dbh->commit() || $dbh->rollback();
}
}
}
#######
# userConfirm
# Wait for a key press
sub userConfirm
{
my $ret = 0;
my $key = "";
# Sleep for key unless -Y was used
if ($yes == 1)
{
$ret = 1;
$key = 'Y';
}
# Wait for a keypress
while ($key eq "")
{
print "\n << 'Y'es or 'N'o >> : ";
$key = <STDIN>;
chomp $key;
# If it's not a Y or N, then ask again
$key =~ s/[^YyNn]//g;
}
if ($key =~ /[Yy]/)
{
$ret = 1;
}
return $ret;
}
#######
# triggerError
# Exit nicely, but print a message as we go about an error
sub triggerError
{
my $msg = shift;
# Set a default message if one wasn't supplied
if (!defined($msg))
{
$msg = "Unknown error";
}
print $msg;
exit 1;
}
#######
# usage
# Script usage
sub usage
{
print <<USAGE
Usage:
$basename [options] [dbname [username]]
Options:
-d <dbname> Specify database name to connect to (default: $database)
-h <host> Specify database server host (default: localhost)
-p <port> Specify database server port (default: 5432)
-u <username> Specify database username (default: $dbuser)
--password=<pw> Specify database password (default: blank)
-Y The script normally asks whether the user wishes to apply
the conversion for each item found. This forces YES to all
questions.
USAGE
;
exit 0;
}

View File

@ -1,26 +0,0 @@
# $PostgreSQL: pgsql/contrib/dbase/Makefile,v 1.8 2005/09/27 17:13:01 tgl Exp $
PROGRAM = dbf2pg
OBJS = dbf.o dbf2pg.o endian.o
PG_CPPFLAGS = -I$(libpq_srcdir)
PG_LIBS = $(libpq_pgport)
# Uncomment this to provide charset translation
#PG_CPPFLAGS += -DHAVE_ICONV_H
# You might need to uncomment this too, if libiconv is a separate
# library on your platform
#PG_LIBS += -liconv
DOCS = README.dbf2pg
MAN = dbf2pg.1 # XXX not implemented
ifdef USE_PGXS
PGXS := $(shell pg_config --pgxs)
include $(PGXS)
else
subdir = contrib/dbase
top_builddir = ../..
include $(top_builddir)/src/Makefile.global
include $(top_srcdir)/contrib/contrib-global.mk
endif

View File

@ -1,139 +0,0 @@
dbf2sql(1L) dbf2sql(1L)
NAME
dbf2sql - Insert xBase-style .dbf-files into a PostgreSQL table
SYNOPSIS
"dbf2pg [options] dbf-file"
Options:
[-v[v]] [-f] [-u | -l] [-c | -D] [-d database] [-t table]
[-h host] [-s oldname=[newname][,oldname=[newname]]] [-b
start] [-e end] [-W] [-U username] [-B transaction_size]
[-F charset_from [-T charset_to]]
DESCRIPTION
This manual page documents the program dbf2pg. It takes
an xBase-style .dbf-file, and inserts it into the specified
database and table.
OPTIONS
-v Display some status-messages.
-vv Also display progress.
-f Convert all field-names from the .dbf-file to lowercase.
-u Convert the contents of all fields to uppercase.
-l Convert the contents of all fields to lowercase.
-c Create the table specified with -t. If this table
already exists, first DROP it.
-D Delete the contents of the table specified with -t.
Note that this table has to exist. An error is
returned if this is not the case.
-W Ask for password.
-d database
Specify the database to use. An error is returned
if this database does not exist. Default is
"test".
-t table
Specify the table to insert into. An error is
returned if this table does not exist. Default is
"test".
-h host
Specify the host to which to connect. Default is
"localhost".
-s oldname=[newname][,oldname=[newname]]
Change the name of a field from oldname to newname.
This is mainly used to avoid using reserved SQL-
keywords. When the new fieldname is empty, the field
is skipped in both the CREATE-clause and the
INSERT-clauses, in common words: it will not be present
in the SQL-table.
Example:
-s SELECT=SEL,remark=,COMMIT=doit
This is done before the -f operator has taken
effect!
-b start
Specify the first record-number in the xBase-file
we will insert.
-e end Specify the last record-number in the xBase-file we
will insert.
-B transaction_size
Specify the number of records per transaction,
default is all records.
-U username
Log as the specified user in the database.
-F charset_from
If specified, it converts the data from the specified
charset. Example:
-F IBM437
Consult your system documentation to see the
conversions available. This requires iconv to be enabled
in the compile.
-T charset_to
Together with -F charset_from , it converts the
data to the specified charset. Default is
"ISO-8859-1". This requires iconv to be enabled
in the compile.
ENVIRONMENT
This program is affected by the environment variables as
used by PostgreSQL. See the documentation of PostgreSQL
for more info. This program can optionally use iconv
character set conversion routines.
BUGS
Fields larger than 8192 characters are not supported and
could break the program.
Some charset conversions could cause the output to be
larger than the input and could break the program.

View File

@ -1,520 +0,0 @@
/* $PostgreSQL: pgsql/contrib/dbase/dbf.c,v 1.11 2006/06/08 03:28:01 momjian Exp $ */
/* Routines to read and write xBase-files (.dbf)
By Maarten Boekhold, 29th of October 1995
Modified by Frank Koormann (fkoorman@usf.uni-osnabrueck.de), Jun 10 1996
prepare dataarea with memset
get systemtime and set filedate
set formatstring for real numbers
*/
#include "postgres_fe.h"
#include <fcntl.h>
#include <ctype.h>
#include <time.h>
#include <unistd.h>
#include "dbf.h"
/* open a dbf-file, get its field-info and store this information */
dbhead *
dbf_open(char *file, int flags)
{
int file_no;
dbhead *dbh;
f_descr *fields;
dbf_header *head;
dbf_field *fieldc;
int t;
if ((dbh = (dbhead *) malloc(sizeof(dbhead))) == NULL)
return (dbhead *) DBF_ERROR;
if ((head = (dbf_header *) malloc(sizeof(dbf_header))) == NULL)
{
free(dbh);
return (dbhead *) DBF_ERROR;
}
if ((fieldc = (dbf_field *) malloc(sizeof(dbf_field))) == NULL)
{
free(head);
free(dbh);
return (dbhead *) DBF_ERROR;
}
if ((file_no = open(file, flags, 0)) == -1)
{
free(fieldc);
free(head);
free(dbh);
return (dbhead *) DBF_ERROR;
}
/* read in the disk-header */
if (read(file_no, head, sizeof(dbf_header)) == -1)
{
close(file_no);
free(fieldc);
free(head);
free(dbh);
return (dbhead *) DBF_ERROR;
}
if (!(head->dbh_dbt & DBH_NORMAL))
{
close(file_no);
free(fieldc);
free(head);
free(dbh);
return (dbhead *) DBF_ERROR;
}
dbh->db_fd = file_no;
if (head->dbh_dbt & DBH_MEMO)
dbh->db_memo = 1;
else
dbh->db_memo = 0;
dbh->db_year = head->dbh_year;
dbh->db_month = head->dbh_month;
dbh->db_day = head->dbh_day;
dbh->db_hlen = get_short((u_char *) &head->dbh_hlen);
dbh->db_records = get_long((u_char *) &head->dbh_records);
dbh->db_currec = 0;
dbh->db_rlen = get_short((u_char *) &head->dbh_rlen);
dbh->db_nfields = (dbh->db_hlen - sizeof(dbf_header)) / sizeof(dbf_field);
/*
 * dbh->db_hlen - sizeof(dbf_header) isn't the correct size, because
 * dbh->db_hlen is in fact a little more because of the 0x0D (and possibly
 * another byte, 0x4E, I have seen this somewhere). Because of rounding
 * everything turns out right :)
 */
if ((fields = (f_descr *) calloc(dbh->db_nfields, sizeof(f_descr)))
== NULL)
{
close(file_no);
free(fieldc);
free(head);
free(dbh);
return (dbhead *) DBF_ERROR;
}
for (t = 0; t < dbh->db_nfields; t++)
{
/* Maybe I have calculated the number of fields incorrectly. This can happen
when programs reserve lots of space at the end of the header for future
expansion. This will catch this situation */
if (fields[t].db_name[0] == 0x0D)
{
dbh->db_nfields = t;
break;
}
read(file_no, fieldc, sizeof(dbf_field));
strncpy(fields[t].db_name, fieldc->dbf_name, DBF_NAMELEN);
fields[t].db_type = fieldc->dbf_type;
fields[t].db_flen = fieldc->dbf_flen;
fields[t].db_dec = fieldc->dbf_dec;
}
dbh->db_offset = dbh->db_hlen;
dbh->db_fields = fields;
if ((dbh->db_buff = (u_char *) malloc(dbh->db_rlen)) == NULL)
return (dbhead *) DBF_ERROR;
free(fieldc);
free(head);
return dbh;
}
int
dbf_write_head(dbhead * dbh)
{
dbf_header head;
time_t now;
struct tm *dbf_time;
if (lseek(dbh->db_fd, 0, SEEK_SET) == -1)
return DBF_ERROR;
/* fill up the diskheader */
/* Set dataarea of head to '\0' */
memset(&head, '\0', sizeof(dbf_header));
head.dbh_dbt = DBH_NORMAL;
if (dbh->db_memo)
head.dbh_dbt = DBH_MEMO;
now = time((time_t *) NULL);
dbf_time = localtime(&now);
head.dbh_year = dbf_time->tm_year;
head.dbh_month = dbf_time->tm_mon + 1; /* Months since January + 1 */
head.dbh_day = dbf_time->tm_mday;
put_long(head.dbh_records, dbh->db_records);
put_short(head.dbh_hlen, dbh->db_hlen);
put_short(head.dbh_rlen, dbh->db_rlen);
if (write(dbh->db_fd, &head, sizeof(dbf_header)) != sizeof(dbf_header))
return DBF_ERROR;
return 0;
}
int
dbf_put_fields(dbhead * dbh)
{
dbf_field field;
u_long t;
u_char end = 0x0D;
if (lseek(dbh->db_fd, sizeof(dbf_header), SEEK_SET) == -1)
return DBF_ERROR;
/* Set dataarea of field to '\0' */
memset(&field, '\0', sizeof(dbf_field));
for (t = 0; t < dbh->db_nfields; t++)
{
strncpy(field.dbf_name, dbh->db_fields[t].db_name, DBF_NAMELEN - 1);
field.dbf_type = dbh->db_fields[t].db_type;
field.dbf_flen = dbh->db_fields[t].db_flen;
field.dbf_dec = dbh->db_fields[t].db_dec;
if (write(dbh->db_fd, &field, sizeof(dbf_field)) != sizeof(dbf_field))
return DBF_ERROR;
}
if (write(dbh->db_fd, &end, 1) != 1)
return DBF_ERROR;
return 0;
}
int
dbf_add_field(dbhead * dbh, char *name, u_char type,
u_char length, u_char dec)
{
f_descr *ptr;
u_char *foo;
u_long size,
field_no;
size = (dbh->db_nfields + 1) * sizeof(f_descr);
if (!(ptr = (f_descr *) realloc(dbh->db_fields, size)))
return DBF_ERROR;
dbh->db_fields = ptr;
field_no = dbh->db_nfields;
strncpy(dbh->db_fields[field_no].db_name, name, DBF_NAMELEN);
dbh->db_fields[field_no].db_type = type;
dbh->db_fields[field_no].db_flen = length;
dbh->db_fields[field_no].db_dec = dec;
dbh->db_nfields++;
dbh->db_hlen += sizeof(dbf_field);
dbh->db_rlen += length;
if (!(foo = (u_char *) realloc(dbh->db_buff, dbh->db_rlen)))
return DBF_ERROR;
dbh->db_buff = foo;
return 0;
}
dbhead *
dbf_open_new(char *name, int flags)
{
dbhead *dbh;
if (!(dbh = (dbhead *) malloc(sizeof(dbhead))))
return (dbhead *) DBF_ERROR;
if (flags & O_CREAT)
{
if ((dbh->db_fd = open(name, flags, DBF_FILE_MODE)) == -1)
{
free(dbh);
return (dbhead *) DBF_ERROR;
}
}
else
{
if ((dbh->db_fd = open(name, flags, 0)) == -1)
{
free(dbh);
return (dbhead *) DBF_ERROR;
}
}
dbh->db_offset = 0;
dbh->db_memo = 0;
dbh->db_year = 0;
dbh->db_month = 0;
dbh->db_day = 0;
dbh->db_hlen = sizeof(dbf_header) + 1;
dbh->db_records = 0;
dbh->db_currec = 0;
dbh->db_rlen = 1;
dbh->db_nfields = 0;
dbh->db_buff = NULL;
dbh->db_fields = (f_descr *) NULL;
return dbh;
}
void
dbf_close(dbhead * dbh)
{
int t;
close(dbh->db_fd);
for (t = 0; t < dbh->db_nfields; t++)
free(&dbh->db_fields[t]);
if (dbh->db_buff != NULL)
free(dbh->db_buff);
free(dbh);
}
int
dbf_get_record(dbhead * dbh, field * fields, u_long rec)
{
u_char *data;
int t,
i,
offset;
u_char *dbffield,
*end;
/* calculate at which offset we have to read. *DON'T* forget the
0x0D which separates field-descriptions from records!
Note (april 5 1996): This turns out to be included in db_hlen
*/
offset = dbh->db_hlen + (rec * dbh->db_rlen);
if (lseek(dbh->db_fd, offset, SEEK_SET) == -1)
{
lseek(dbh->db_fd, 0, SEEK_SET);
dbh->db_offset = 0;
return DBF_ERROR;
}
dbh->db_offset = offset;
dbh->db_currec = rec;
data = dbh->db_buff;
read(dbh->db_fd, data, dbh->db_rlen);
if (data[0] == DBF_DELETED)
return DBF_DELETED;
dbffield = &data[1];
for (t = 0; t < dbh->db_nfields; t++)
{
strncpy(fields[t].db_name, dbh->db_fields[t].db_name, DBF_NAMELEN);
fields[t].db_type = dbh->db_fields[t].db_type;
fields[t].db_flen = dbh->db_fields[t].db_flen;
fields[t].db_dec = dbh->db_fields[t].db_dec;
if (fields[t].db_type == 'C')
{
end = &dbffield[fields[t].db_flen - 1];
i = fields[t].db_flen;
while (i > 0 && !isprint(*end))
{
end--;
i--;
}
strncpy((char *) fields[t].db_contents, (char *) dbffield, i);
fields[t].db_contents[i] = '\0';
}
else
{
end = dbffield;
i = fields[t].db_flen;
while (i > 0 && !isprint(*end))
{
end++;
i--;
}
strncpy((char *) fields[t].db_contents, (char *) end, i);
fields[t].db_contents[i] = '\0';
}
dbffield += fields[t].db_flen;
}
dbh->db_offset += dbh->db_rlen;
return DBF_VALID;
}
field *
dbf_build_record(dbhead * dbh)
{
int t;
field *fields;
if (!(fields = (field *) calloc(dbh->db_nfields, sizeof(field))))
return (field *) DBF_ERROR;
for (t = 0; t < dbh->db_nfields; t++)
{
if (!(fields[t].db_contents =
(u_char *) malloc(dbh->db_fields[t].db_flen + 1)))
{
for (t = 0; t < dbh->db_nfields; t++)
{
if (fields[t].db_contents != 0)
{
free(fields[t].db_contents);
free(fields);
}
return (field *) DBF_ERROR;
}
}
strncpy(fields[t].db_name, dbh->db_fields[t].db_name, DBF_NAMELEN);
fields[t].db_type = dbh->db_fields[t].db_type;
fields[t].db_flen = dbh->db_fields[t].db_flen;
fields[t].db_dec = dbh->db_fields[t].db_dec;
}
return fields;
}
void
dbf_free_record(dbhead * dbh, field * rec)
{
int t;
for (t = 0; t < dbh->db_nfields; t++)
free(rec[t].db_contents);
free(rec);
}
int
dbf_put_record(dbhead * dbh, field * rec, u_long where)
{
u_long offset,
new,
idx,
t,
h,
length;
u_char *data,
end = 0x1a;
double fl;
char foo[128],
format[32];
/* offset: offset in file for this record
new: real offset after lseek
idx: index to which place we are inside the 'hardcore'-data for this
record
t: field-counter
data: the hardcore-data that is put on disk
h: index into the field-part in the hardcore-data
length: length of the data to copy
fl: a float used to get the right precision with real numbers
foo: copy of db_contents when field is not 'C'
format: sprintf format-string to get the right precision with real numbers
NOTE: this declaration of 'foo' can cause overflow when the contents-field
is longer than 127 chars (which is highly unlikely, because it is not used
in text-fields).
*/
/* Remember that there's a 0x1A at the end of the file, so don't
   do a SEEK_END with offset 0; use -1.
*/
if (where > dbh->db_records)
{
if ((new = lseek(dbh->db_fd, -1, SEEK_END)) == -1)
return DBF_ERROR;
dbh->db_records++;
}
else
{
offset = dbh->db_hlen + (where * dbh->db_rlen);
if ((new = lseek(dbh->db_fd, offset, SEEK_SET)) == -1)
return DBF_ERROR;
}
dbh->db_offset = new;
data = dbh->db_buff;
/* Set dataarea of data to ' ' (space) */
memset(data, ' ', dbh->db_rlen);
/* data[0] = DBF_VALID; */
idx = 1;
for (t = 0; t < dbh->db_nfields; t++)
{
/* if field is empty, don't do a thing */
if (rec[t].db_contents[0] != '\0')
{
/* Handle text */
if (rec[t].db_type == 'C')
{
if (strlen((char *) rec[t].db_contents) > rec[t].db_flen)
length = rec[t].db_flen;
else
length = strlen((char *) rec[t].db_contents);
strncpy((char *) data + idx, (char *) rec[t].db_contents,
length);
}
else
{
/* Handle the rest */
/* Numeric is special, because of real numbers */
if ((rec[t].db_type == 'N') && (rec[t].db_dec != 0))
{
fl = atof((char *) rec[t].db_contents);
snprintf(format, 32, "%%.%df", rec[t].db_dec);
snprintf(foo, 128, format, fl);
}
else
strncpy(foo, (char *) rec[t].db_contents, 128);
if (strlen(foo) > rec[t].db_flen)
length = rec[t].db_flen;
else
length = strlen(foo);
h = rec[t].db_flen - length;
strncpy((char *) (data + idx + h), foo, length);
}
}
idx += rec[t].db_flen;
}
if (write(dbh->db_fd, data, dbh->db_rlen) != dbh->db_rlen)
return DBF_ERROR;
/* There's a 0x1A at the end of a dbf-file */
if (where == dbh->db_records)
{
if (write(dbh->db_fd, &end, 1) != 1)
return DBF_ERROR;
}
dbh->db_offset += dbh->db_rlen;
return 0;
}


@ -1,142 +0,0 @@
/* $PostgreSQL: pgsql/contrib/dbase/dbf.h,v 1.9 2006/03/11 04:38:28 momjian Exp $ */
/* header-file for dbf.c
declares routines for reading and writing xBase-files (.dbf), and
associated structures
Maarten Boekhold (maarten.boekhold@reuters.com) 29 October 1995
*/
#ifndef _DBF_H
#define _DBF_H
#ifdef _WIN32
#include <gmon.h> /* we need it to define u_char type */
#endif
#include <sys/types.h>
/**********************************************************************
The DBF-part
***********************************************************************/
#define DBF_FILE_MODE 0644
/* byte offsets for date in dbh_date */
#define DBH_DATE_YEAR 0
#define DBH_DATE_MONTH 1
#define DBH_DATE_DAY 2
/* maximum fieldname-length */
#define DBF_NAMELEN 11
/* magic-cookies for the file */
#define DBH_NORMAL 0x03
#define DBH_MEMO 0x83
/* magic-cookies for the fields */
#define DBF_ERROR -1
#define DBF_VALID 0x20
#define DBF_DELETED 0x2A
/* diskheader */
typedef struct
{
u_char dbh_dbt; /* identification field */
u_char dbh_year; /* last modification-date */
u_char dbh_month;
u_char dbh_day;
u_char dbh_records[4]; /* number of records */
u_char dbh_hlen[2]; /* length of this header */
u_char dbh_rlen[2]; /* length of a record */
u_char dbh_stub[20]; /* misc stuff we don't need */
} dbf_header;
/* disk field-description */
typedef struct
{
char dbf_name[DBF_NAMELEN]; /* field-name terminated with \0 */
u_char dbf_type; /* field-type */
u_char dbf_reserved[4]; /* some reserved stuff */
u_char dbf_flen; /* field-length */
u_char dbf_dec; /* number of decimal positions if type is 'N' */
u_char dbf_stub[14]; /* stuff we don't need */
} dbf_field;
/* memory field-description */
typedef struct
{
char db_name[DBF_NAMELEN]; /* field-name terminated with \0 */
u_char db_type; /* field-type */
u_char db_flen; /* field-length */
u_char db_dec; /* number of decimal positions */
} f_descr;
/* memory dbf-header */
typedef struct
{
int db_fd; /* file-descriptor */
u_long db_offset; /* current offset in file */
u_char db_memo; /* memo-file present */
u_char db_year; /* last update as YYMMDD */
u_char db_month;
u_char db_day;
u_long db_hlen; /* length of the diskheader, for calculating
* the offsets */
u_long db_records; /* number of records */
u_long db_currec; /* current record-number starting at 0 */
u_short db_rlen; /* length of the record */
u_char db_nfields; /* number of fields */
u_char *db_buff; /* record-buffer to save malloc()'s */
f_descr *db_fields; /* pointer to an array of field- descriptions */
} dbhead;
/* structure that contains everything a user wants from a field, including
the contents (in ASCII). Warning! db_flen may be bigger than the actual
length of db_name! This is because a field doesn't have to be completely
filled */
typedef struct
{
char db_name[DBF_NAMELEN]; /* field-name terminated with \0 */
u_char db_type; /* field-type */
u_char db_flen; /* field-length */
u_char db_dec; /* number of decimal positions */
u_char *db_contents; /* contents of the field in ASCII */
} field;
/* prototypes for functions */
extern dbhead *dbf_open(char *file, int flags);
extern int dbf_write_head(dbhead * dbh);
extern int dbf_put_fields(dbhead * dbh);
extern int dbf_add_field(dbhead * dbh, char *name, u_char type,
u_char length, u_char dec);
extern dbhead *dbf_open_new(char *name, int flags);
extern void dbf_close(dbhead * dbh);
extern int dbf_get_record(dbhead * dbh, field * fields, u_long rec);
extern field *dbf_build_record(dbhead * dbh);
extern void dbf_free_record(dbhead * dbh, field * fields);
extern int dbf_put_record(dbhead * dbh, field * rec, u_long where);
/*********************************************************************
The endian-part
***********************************************************************/
extern long get_long(u_char *cp);
extern void put_long(u_char *cp, long lval);
extern short get_short(u_char *cp);
extern void put_short(u_char *cp, short lval);
#endif /* _DBF_H */


@ -1,118 +0,0 @@
.\" $PostgreSQL: pgsql/contrib/dbase/dbf2pg.1,v 1.3 2006/03/11 04:38:28 momjian Exp $
.TH dbf2pg 1L \" -*- nroff -*-
.SH NAME
dbf2pg \- Insert xBase\-style .dbf\-files into a PostgreSQL\-table
.SH SYNOPSIS
.B dbf2pg [options] dbf-file
.br
.br
Options:
.br
[-v[v]] [-f] [-u | -l] [-c | -D] [-d database] [-t table]
[-h host] [-s oldname=[newname][,oldname=[newname]]]
[-b start] [-e end] [-W] [-U username] [-B transaction_size]
[-F charset_from [-T charset_to]]
.SH DESCRIPTION
This manual page documents the program
.BR dbf2pg.
It takes an xBase-style .dbf-file, and inserts it into the specified
database and table.
.SS OPTIONS
.TP
.I "\-v"
Display some status-messages.
.TP
.I "-vv"
Also display progress.
.TP
.I "-f"
Convert all field-names from the .dbf-file to lowercase.
.TP
.I "-u"
Convert the contents of all fields to uppercase.
.TP
.I "-l"
Convert the contents of all fields to lowercase.
.TP
.I "-c"
Create the table specified with
.IR \-t .
If this table already exists, first
.BR DROP
it.
.TP
.I "-D"
Delete the contents of the table specified with
.IR \-t .
Note that this table has to exist. An error is returned if this is not the
case.
.TP
.I "-W"
Ask for password.
.TP
.I "-d database"
Specify the database to use. An error is returned if this database does not
exist. Default is "test".
.TP
.I "-t table"
Specify the table to insert in. An error is returned if this table does not
exist. Default is "test".
.TP
.I "-h host"
Specify the host to which to connect. Default is "localhost".
.TP
.I "-s oldname=newname[,oldname=newname]"
Change the name of a field from
.BR oldname
to
.BR newname .
This is mainly used to avoid using reserved SQL-keywords. Example:
.br
.br
-s SELECT=SEL,COMMIT=doit
.br
.br
This is done
.BR before
the
.IR -f
option has taken effect!
.TP
.I "-b start"
Specify the first record-number in the xBase-file we will insert.
.TP
.I "-e end"
Specify the last record-number in the xBase-file we will insert.
.TP
.I "-B transaction_size"
Specify the number of records per transaction, default is all records.
.TP
.I "-U username"
Log in to the database as the specified user.
.TP
.I "-F charset_from"
If specified, it converts the data from the specified charset. Example:
.br
.br
-F IBM437
.br
.br
Consult your system documentation to see the available conversions.
.TP
.I "-T charset_to"
Together with
.I "-F charset_from"
, it converts the data to the specified charset. Default is "ISO-8859-1".
.SH ENVIRONMENT
This program is affected by the environment variables used by
.B PostgreSQL.
See the PostgreSQL documentation for more info.
.SH BUGS
Fields larger than 8192 characters are not supported and could break the
program.
.br
Some charset conversions could cause the output to be larger than the input
and could break the program.


@ -1,839 +0,0 @@
/* $PostgreSQL: pgsql/contrib/dbase/dbf2pg.c,v 1.27 2006/03/11 04:38:28 momjian Exp $ */
/* This program reads in an xBase .dbf file and sends 'inserts' to a
   PostgreSQL server with the records in the xBase file
   M. Boekhold (maarten.boekhold@reuters.com) Oct. 1995
   October 1996: merged sources of dbf2msql.c and dbf2pg.c
   October 1997: removed msql support
*/
#include "postgres_fe.h"
#include <fcntl.h>
#include <unistd.h>
#include <ctype.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
#ifdef HAVE_ICONV_H
#include <iconv.h>
#endif
#ifdef HAVE_GETOPT_H
#include <getopt.h>
#endif
#include "libpq-fe.h"
#include "dbf.h"
int verbose = 0,
upper = 0,
lower = 0,
create = 0,
fieldlow = 0;
int del = 0;
unsigned int begin = 0,
end = 0;
unsigned int t_block = 0;
#ifdef HAVE_ICONV_H
char *charset_from = NULL;
char *charset_to = "ISO-8859-1";
iconv_t iconv_d;
char convert_charset_buff[8192];
#endif
char *host = NULL;
char *dbase = "test";
char *table = "test";
char *username = NULL;
char *password = NULL;
char *subarg = NULL;
char escape_buff[8192];
void do_substitute(char *subarg, dbhead * dbh);
static inline void strtoupper(char *string);
static inline void strtolower(char *string);
void do_create(PGconn *, char *, dbhead *);
void do_inserts(PGconn *, char *, dbhead *);
int check_table(PGconn *, char *);
char *Escape_db(char *);
#ifdef HAVE_ICONV_H
char *convert_charset(char *string);
#endif
void usage(void);
static inline void
strtoupper(char *string)
{
while (*string != '\0')
{
*string = toupper((unsigned char) *string);
string++;
}
}
static inline void
strtolower(char *string)
{
while (*string != '\0')
{
*string = tolower((unsigned char) *string);
string++;
}
}
/* FIXME: should this check for overflow? */
char *
Escape_db(char *string)
{
char *foo,
*bar;
foo = escape_buff;
bar = string;
while (*bar != '\0')
{
if ((*bar == '\t') ||
(*bar == '\n') ||
(*bar == '\\'))
*foo++ = '\\';
*foo++ = *bar++;
}
*foo = '\0';
return escape_buff;
}
#ifdef HAVE_ICONV_H
char *
convert_charset(char *string)
{
size_t in_size,
out_size,
nconv;
char *in_ptr,
*out_ptr;
in_size = strlen(string) + 1;
out_size = sizeof(convert_charset_buff);
in_ptr = string;
out_ptr = convert_charset_buff;
iconv(iconv_d, NULL, &in_size, &out_ptr, &out_size); /* necessary to reset
* state information */
while (in_size > 0)
{
nconv = iconv(iconv_d, &in_ptr, &in_size, &out_ptr, &out_size);
if (nconv == (size_t) -1)
{
printf("WARNING: cannot convert charset of string \"%s\".\n",
string);
strcpy(convert_charset_buff, string);
return convert_charset_buff;
}
}
*out_ptr = 0; /* terminate output string */
return convert_charset_buff;
}
#endif
int
check_table(PGconn *conn, char *table)
{
char *q = "select relname from pg_class where "
"relkind='r' and relname !~* '^pg'";
PGresult *res;
int i = 0;
	if (!(res = PQexec(conn, q)))
	{
		printf("%s\n", PQerrorMessage(conn));
		return 0;
	}
	for (i = 0; i < PQntuples(res); i++)
	{
		if (!strcmp(table, PQgetvalue(res, i, PQfnumber(res, "relname"))))
		{
			PQclear(res);
			return 1;
		}
	}
	PQclear(res);
	return 0;
}
void
usage(void)
{
printf("dbf2pg\n"
"usage: dbf2pg [-u | -l] [-h hostname] [-W] [-U username]\n"
" [-B transaction_size] [-F charset_from [-T charset_to]]\n"
" [-s oldname=[newname][,oldname=[newname][...]]] [-d dbase]\n"
" [-t table] [-c | -D] [-f] [-v[v]] dbf-file\n");
}
/* patch submitted by Jeffrey Y. Sue <jysue@aloha.net> */
/* Provides functionality for substituting dBase-fieldnames for others */
/* Mainly for avoiding conflicts between fieldnames and SQL-reserved */
/* keywords */
void
do_substitute(char *subarg, dbhead * dbh)
{
/* NOTE: subarg is modified in this function */
int i,
bad;
char *p,
*oldname,
*newname;
if (!subarg)
return;
if (verbose > 1)
printf("Substituting new field names\n");
/* use strstr instead of strtok because of possible empty tokens */
oldname = subarg;
while (oldname && strlen(oldname) && (p = strstr(oldname, "=")))
{
*p = '\0'; /* mark end of oldname */
newname = ++p; /* point past \0 of oldname */
if (strlen(newname))
{ /* if not an empty string */
p = strstr(newname, ",");
if (p)
{
*p = '\0'; /* mark end of newname */
p++; /* point past where the comma was */
}
}
if (strlen(newname) >= DBF_NAMELEN)
{
printf("Truncating new field name %s to %d chars\n",
newname, DBF_NAMELEN - 1);
newname[DBF_NAMELEN - 1] = '\0';
}
bad = 1;
for (i = 0; i < dbh->db_nfields; i++)
{
if (strcmp(dbh->db_fields[i].db_name, oldname) == 0)
{
bad = 0;
strcpy(dbh->db_fields[i].db_name, newname);
if (verbose > 1)
{
printf("Substitute old:%s new:%s\n",
oldname, newname);
}
break;
}
}
if (bad)
{
printf("Warning: old field name %s not found\n",
oldname);
}
oldname = p;
}
} /* do_substitute */
void
do_create(PGconn *conn, char *table, dbhead * dbh)
{
char *query;
char t[20];
int i,
length;
PGresult *res;
if (verbose > 1)
printf("Building CREATE-clause\n");
if (!(query = (char *) malloc(
(dbh->db_nfields * 40) + 29 + strlen(table))))
{
fprintf(stderr, "Memory allocation error in function do_create\n");
PQfinish(conn);
close(dbh->db_fd);
free(dbh);
exit(1);
}
sprintf(query, "CREATE TABLE %s (", table);
length = strlen(query);
for (i = 0; i < dbh->db_nfields; i++)
{
if (!strlen(dbh->db_fields[i].db_name))
{
continue;
/* skip field if length of name == 0 */
}
if ((strlen(query) != length))
strcat(query, ",");
if (fieldlow)
strtolower(dbh->db_fields[i].db_name);
strcat(query, dbh->db_fields[i].db_name);
switch (dbh->db_fields[i].db_type)
{
case 'D':
strcat(query, " date");
break;
case 'C':
if (dbh->db_fields[i].db_flen > 1)
{
strcat(query, " varchar");
snprintf(t, 20, "(%d)",
dbh->db_fields[i].db_flen);
strcat(query, t);
}
else
strcat(query, " char");
break;
case 'N':
if (dbh->db_fields[i].db_dec != 0)
strcat(query, " real");
else
strcat(query, " int");
break;
case 'L':
strcat(query, " char");
break;
case 'M':
strcat(query, " text");
break;
}
}
strcat(query, ")");
if (verbose > 1)
{
printf("Sending create-clause\n");
printf("%s\n", query);
}
if ((res = PQexec(conn, query)) == NULL ||
PQresultStatus(res) != PGRES_COMMAND_OK)
{
fprintf(stderr, "Error creating table!\n");
fprintf(stderr, "Detailed report: %s\n", PQerrorMessage(conn));
close(dbh->db_fd);
free(dbh);
free(query);
PQfinish(conn);
exit(1);
}
PQclear(res);
free(query);
}
/* FIXME: can be optimized to not use strcat, but is it worth the effort? */
void
do_inserts(PGconn *conn, char *table, dbhead * dbh)
{
PGresult *res;
field *fields;
int i,
h,
j,
result;
char *query,
*foo;
char pgdate[11];
if (verbose > 1)
printf("Inserting records\n");
h = 2; /* 2 because of terminating \n\0 */
for (i = 0; i < dbh->db_nfields; i++)
{
h += dbh->db_fields[i].db_flen > 2 ?
dbh->db_fields[i].db_flen :
2; /* account for possible NULL values (\N) */
h += 1; /* the delimiter */
}
/*
* make sure we can build the COPY query, note that we don't need to just
* add this value, since the COPY query is a separate query (see below)
*/
if (h < 17 + strlen(table))
h = 17 + strlen(table);
if (!(query = (char *) malloc(h)))
{
PQfinish(conn);
fprintf(stderr,
"Memory allocation error in function do_inserts (query)\n");
close(dbh->db_fd);
free(dbh);
exit(1);
}
if ((fields = dbf_build_record(dbh)) == (field *) DBF_ERROR)
{
fprintf(stderr,
"Couldn't allocate memory for record in do_insert\n");
PQfinish(conn);
free(query);
dbf_close(dbh);
exit(1);
}
if (end == 0) /* "end" is a user option, if not specified, */
end = dbh->db_records; /* then all records are processed. */
if (t_block == 0) /* user not specified transaction block size */
t_block = end - begin; /* then we set it to be the full data */
for (i = begin; i < end; i++)
{
/* we need to start a new transaction and COPY statement */
if (((i - begin) % t_block) == 0)
{
if (verbose > 1)
fprintf(stderr, "Transaction: START\n");
res = PQexec(conn, "BEGIN");
if (res == NULL)
{
fprintf(stderr, "Error starting transaction!\n");
fprintf(stderr, "Detailed report: %s\n", PQerrorMessage(conn));
exit(1);
}
sprintf(query, "COPY %s FROM stdin", table);
res = PQexec(conn, query);
if (res == NULL)
{
fprintf(stderr, "Error starting COPY!\n");
fprintf(stderr, "Detailed report: %s\n", PQerrorMessage(conn));
exit(1);
}
}
/* build line and submit */
result = dbf_get_record(dbh, fields, i);
if (result == DBF_VALID)
{
query[0] = '\0';
j = 0; /* counter for fields in the output */
for (h = 0; h < dbh->db_nfields; h++)
{
if (!strlen(fields[h].db_name)) /* When the new fieldname is
* empty, the field is skipped */
continue;
else
j++;
if (j > 1) /* not for the first field! */
strcat(query, "\t"); /* COPY statement field
* separator */
if (upper)
strtoupper((char *) fields[h].db_contents);
if (lower)
strtolower((char *) fields[h].db_contents);
foo = (char *) fields[h].db_contents;
#ifdef HAVE_ICONV_H
if (charset_from)
foo = convert_charset(foo);
#endif
foo = Escape_db(foo);
/* handle the date first - liuk */
if (fields[h].db_type == 'D')
{
if (strlen(foo) == 0)
{
/* assume empty string means a NULL */
strcat(query, "\\N");
}
else if (strlen(foo) == 8 &&
strspn(foo, "0123456789") == 8)
{
/* transform YYYYMMDD to Postgres style */
snprintf(pgdate, 11, "%c%c%c%c-%c%c-%c%c",
foo[0], foo[1], foo[2], foo[3],
foo[4], foo[5], foo[6], foo[7]);
strcat(query, pgdate);
}
else
{
/* try to insert it as-is */
strcat(query, foo);
}
}
else if (fields[h].db_type == 'N')
{
if (strlen(foo) == 0)
{
/* assume empty string means a NULL */
strcat(query, "\\N");
}
else
strcat(query, foo);
}
else
{
strcat(query, foo); /* must be character */
}
}
strcat(query, "\n");
if ((verbose > 1) && ((i % 100) == 0))
{ /* Only show every 100 */
printf("Inserting record %d\n", i); /* records. */
}
PQputline(conn, query);
}
/* we need to end this copy and transaction */
if (((i - begin) % t_block) == t_block - 1)
{
if (verbose > 1)
fprintf(stderr, "Transaction: END\n");
PQputline(conn, "\\.\n");
if (PQendcopy(conn) != 0)
{
fprintf(stderr, "Something went wrong while copying. Check "
"your tables!\n");
exit(1);
}
res = PQexec(conn, "END");
if (res == NULL)
{
fprintf(stderr, "Error committing work!\n");
fprintf(stderr, "Detailed report: %s\n", PQerrorMessage(conn));
exit(1);
}
}
}
/* last row copied in, end copy and transaction */
/* remember, i is now 1 greater than when we left the loop */
if (((i - begin) % t_block) != 0)
{
if (verbose > 1)
fprintf(stderr, "Transaction: END\n");
PQputline(conn, "\\.\n");
if (PQendcopy(conn) != 0)
{
fprintf(stderr, "Something went wrong while copying. Check "
"your tables!\n");
}
res = PQexec(conn, "END");
if (res == NULL)
{
fprintf(stderr, "Error committing work!\n");
fprintf(stderr, "Detailed report: %s\n", PQerrorMessage(conn));
exit(1);
}
}
dbf_free_record(dbh, fields);
free(query);
}
int
main(int argc, char **argv)
{
PGconn *conn;
int i;
extern int optind;
extern char *optarg;
char *query;
dbhead *dbh;
while ((i = getopt(argc, argv, "DWflucvh:b:e:d:t:s:B:U:F:T:")) != -1)
{
switch (i)
{
case 'D':
if (create)
{
usage();
printf("Can't use -c and -D at the same time!\n");
exit(1);
}
del = 1;
break;
case 'W':
password = simple_prompt("Password: ", 100, 0);
break;
case 'f':
fieldlow = 1;
break;
case 'v':
verbose++;
break;
case 'c':
if (del)
{
usage();
printf("Can't use -c and -D at the same time!\n");
exit(1);
}
create = 1;
break;
case 'l':
lower = 1;
break;
case 'u':
if (lower)
{
usage();
printf("Can't use -u and -l at the same time!\n");
exit(1);
}
upper = 1;
break;
case 'b':
begin = atoi(optarg);
break;
case 'e':
end = atoi(optarg);
break;
case 'h':
host = (char *) strdup(optarg);
break;
case 'd':
dbase = (char *) strdup(optarg);
break;
case 't':
table = (char *) strdup(optarg);
break;
case 's':
subarg = (char *) strdup(optarg);
break;
case 'B':
t_block = atoi(optarg);
break;
case 'U':
username = (char *) strdup(optarg);
break;
case 'F':
#ifdef HAVE_ICONV_H
charset_from = (char *) strdup(optarg);
#else
printf("WARNING: dbf2pg was compiled without iconv support, ignoring -F option\n");
#endif
break;
#ifdef HAVE_ICONV_H
case 'T':
charset_to = (char *) strdup(optarg);
break;
#endif
case ':':
usage();
printf("missing argument!\n");
exit(1);
break;
case '?':
usage();
/*
 * FIXME: Ivan thinks this is bad:
 * printf("unknown argument: %s\n", argv[0]);
 */
exit(1);
break;
default:
break;
}
}
argc -= optind;
argv = &argv[optind];
if (argc != 1)
{
usage();
if (username)
free(username);
if (password)
free(password);
exit(1);
}
#ifdef HAVE_ICONV_H
if (charset_from)
{
if (verbose > 1)
printf("Setting conversion from charset \"%s\" to \"%s\".\n",
charset_from, charset_to);
iconv_d = iconv_open(charset_to, charset_from);
if (iconv_d == (iconv_t) - 1)
{
printf("Cannot convert from charset \"%s\" to charset \"%s\".\n",
charset_from, charset_to);
exit(1);
}
}
#endif
if (verbose > 1)
printf("Opening dbf-file\n");
setlocale(LC_ALL, ""); /* fix for isprint() */
if ((dbh = dbf_open(argv[0], O_RDONLY)) == (dbhead *) - 1)
{
fprintf(stderr, "Couldn't open xbase-file %s\n", argv[0]);
if (username)
free(username);
if (password)
free(password);
#ifdef HAVE_ICONV_H
if (charset_from)
iconv_close(iconv_d);
#endif
exit(1);
}
if (fieldlow)
for (i = 0; i < dbh->db_nfields; i++)
strtolower(dbh->db_fields[i].db_name);
if (verbose)
{
printf("dbf-file: %s, PG-dbase: %s, PG-table: %s\n", argv[0],
dbase,
table);
printf("Number of records: %lu\n", dbh->db_records);
printf("NAME:\t\tLENGTH:\t\tTYPE:\n");
printf("-------------------------------------\n");
for (i = 0; i < dbh->db_nfields; i++)
{
printf("%-12s\t%7d\t\t%5c\n", dbh->db_fields[i].db_name,
dbh->db_fields[i].db_flen,
dbh->db_fields[i].db_type);
}
}
if (verbose > 1)
printf("Making connection to PG-server\n");
conn = PQsetdbLogin(host, NULL, NULL, NULL, dbase, username, password);
if (PQstatus(conn) != CONNECTION_OK)
{
fprintf(stderr, "Couldn't get a connection with the ");
fprintf(stderr, "designated host!\n");
fprintf(stderr, "Detailed report: %s\n", PQerrorMessage(conn));
close(dbh->db_fd);
free(dbh);
if (username)
free(username);
if (password)
free(password);
#ifdef HAVE_ICONV_H
if (charset_from)
iconv_close(iconv_d);
#endif
exit(1);
}
PQexec(conn, "SET search_path = public");
/* Substitute field names */
do_substitute(subarg, dbh);
/* create table if specified, else check if target table exists */
if (!create)
{
if (!check_table(conn, table))
{
printf("Table does not exist!\n");
if (username)
free(username);
if (password)
free(password);
#ifdef HAVE_ICONV_H
if (charset_from)
iconv_close(iconv_d);
#endif
exit(1);
}
if (del)
{
if (!(query = (char *) malloc(13 + strlen(table))))
{
printf("Memory-allocation error in main (delete)!\n");
close(dbh->db_fd);
free(dbh);
PQfinish(conn);
if (username)
free(username);
if (password)
free(password);
#ifdef HAVE_ICONV_H
if (charset_from)
iconv_close(iconv_d);
#endif
exit(1);
}
if (verbose > 1)
printf("Deleting from original table\n");
sprintf(query, "DELETE FROM %s", table);
PQexec(conn, query);
free(query);
}
}
else
{
if (!(query = (char *) malloc(12 + strlen(table))))
{
printf("Memory-allocation error in main (drop)!\n");
close(dbh->db_fd);
free(dbh);
PQfinish(conn);
if (username)
free(username);
if (password)
free(password);
#ifdef HAVE_ICONV_H
if (charset_from)
iconv_close(iconv_d);
#endif
exit(1);
}
if (verbose > 1)
printf("Dropping original table (if one exists)\n");
sprintf(query, "DROP TABLE %s", table);
PQexec(conn, query);
free(query);
/* Build a CREATE-clause
*/
do_create(conn, table, dbh);
}
/* Build an INSERT-clause
*/
PQexec(conn, "SET DATESTYLE TO 'ISO';");
do_inserts(conn, table, dbh);
if (verbose > 1)
printf("Closing up....\n");
close(dbh->db_fd);
free(dbh);
PQfinish(conn);
if (username)
free(username);
if (password)
free(password);
#ifdef HAVE_ICONV_H
if (charset_from)
iconv_close(iconv_d);
#endif
exit(0);
}


@ -1,50 +0,0 @@
/* $PostgreSQL: pgsql/contrib/dbase/endian.c,v 1.4 2006/03/11 04:38:28 momjian Exp $ */
/* Maarten Boekhold (maarten.boekhold@reuters.com) October 1995 */
#include <sys/types.h>
#include "dbf.h"
/*
* routine to change little endian long to host long
*/
long
get_long(u_char *cp)
{
long ret;
ret = *cp++;
ret += ((*cp++) << 8);
ret += ((*cp++) << 16);
ret += ((*cp++) << 24);
return ret;
}
void
put_long(u_char *cp, long lval)
{
cp[0] = lval & 0xff;
cp[1] = (lval >> 8) & 0xff;
cp[2] = (lval >> 16) & 0xff;
cp[3] = (lval >> 24) & 0xff;
}
/*
* routine to change little endian short to host short
*/
short
get_short(u_char *cp)
{
short ret;
ret = *cp++;
ret += ((*cp++) << 8);
return ret;
}
void
put_short(u_char *cp, short sval)
{
cp[0] = sval & 0xff;
cp[1] = (sval >> 8) & 0xff;
}


@ -1,7 +0,0 @@
-- Adjust this setting to control where the objects get created.
SET search_path = public;
CREATE TRIGGER "MyTableName_Trig"
AFTER INSERT OR DELETE OR UPDATE ON "MyTableName"
FOR EACH ROW EXECUTE PROCEDURE "recordchange" ();

File diff suppressed because it is too large


@ -1,16 +0,0 @@
# $PostgreSQL: pgsql/contrib/dbmirror/Makefile,v 1.5 2005/09/27 17:13:01 tgl Exp $
MODULES = pending
SCRIPTS = clean_pending.pl DBMirror.pl
DATA = AddTrigger.sql MirrorSetup.sql slaveDatabase.conf
DOCS = README.dbmirror
ifdef USE_PGXS
PGXS := $(shell pg_config --pgxs)
include $(PGXS)
else
subdir = contrib/dbmirror
top_builddir = ../..
include $(top_builddir)/src/Makefile.global
include $(top_srcdir)/contrib/contrib-global.mk
endif


@ -1,54 +0,0 @@
BEGIN;
CREATE FUNCTION "recordchange" () RETURNS trigger
AS '$libdir/pending', 'recordchange'
LANGUAGE C;
CREATE TABLE dbmirror_MirrorHost (
MirrorHostId serial PRIMARY KEY,
SlaveName varchar NOT NULL
);
CREATE TABLE dbmirror_Pending (
SeqId serial PRIMARY KEY,
TableName name NOT NULL,
Op character,
XID integer NOT NULL
);
CREATE INDEX dbmirror_Pending_XID_Index ON dbmirror_Pending (XID);
CREATE TABLE dbmirror_PendingData (
SeqId integer NOT NULL,
IsKey boolean NOT NULL,
Data varchar,
PRIMARY KEY (SeqId, IsKey) ,
FOREIGN KEY (SeqId) REFERENCES dbmirror_Pending (SeqId) ON UPDATE CASCADE ON DELETE CASCADE
);
CREATE TABLE dbmirror_MirroredTransaction (
XID integer NOT NULL,
LastSeqId integer NOT NULL,
MirrorHostId integer NOT NULL,
PRIMARY KEY (XID, MirrorHostId),
FOREIGN KEY (MirrorHostId) REFERENCES dbmirror_MirrorHost (MirrorHostId) ON UPDATE CASCADE ON DELETE CASCADE,
FOREIGN KEY (LastSeqId) REFERENCES dbmirror_Pending (SeqId) ON UPDATE CASCADE ON DELETE CASCADE
);
UPDATE pg_proc SET proname='nextval_pg' WHERE proname='nextval';
CREATE FUNCTION pg_catalog.nextval(regclass) RETURNS bigint
AS '$libdir/pending', 'nextval_mirror'
LANGUAGE C STRICT;
UPDATE pg_proc set proname='setval_pg' WHERE proname='setval';
CREATE FUNCTION pg_catalog.setval(regclass, bigint, boolean) RETURNS bigint
AS '$libdir/pending', 'setval3_mirror'
LANGUAGE C STRICT;
CREATE FUNCTION pg_catalog.setval(regclass, bigint) RETURNS bigint
AS '$libdir/pending', 'setval_mirror'
LANGUAGE C STRICT;
COMMIT;


@ -1,254 +0,0 @@
DBMirror - PostgreSQL Database Mirroring
===================================================
DBMirror is a database mirroring system developed for the PostgreSQL
database, written and maintained by Steven Singer (ssinger@navtechinc.com)
(c) 2001-2004 Navtech Systems Support Inc.
ALL RIGHTS RESERVED
Permission to use, copy, modify, and distribute this software and its
documentation for any purpose, without fee, and without a written agreement
is hereby granted, provided that the above copyright notice and this
paragraph and the following two paragraphs appear in all copies.
IN NO EVENT SHALL THE AUTHOR OR DISTRIBUTORS BE LIABLE TO ANY PARTY FOR
DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
DOCUMENTATION, EVEN IF THE AUTHOR OR DISTRIBUTORS HAVE BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
THE AUTHOR AND DISTRIBUTORS SPECIFICALLY DISCLAIMS ANY WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS
ON AN "AS IS" BASIS, AND THE AUTHOR AND DISTRIBUTORS HAS NO OBLIGATIONS TO
PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
Overview
--------------------------------------------------------------------
The mirroring system is trigger based and provides the following key features:
-Support for multiple mirror slaves
-Transactions are maintained
-Per table selection of what gets mirrored.
The system is based on the idea that a master database exists where all
edits are made to the tables being mirrored. A trigger attached to the
tables being mirrored runs on each edit, logging information about it to
the Pending table and PendingData table.
A Perl script (DBMirror.pl) runs continuously for each slave database (a
database that changes are mirrored to), examining the Pending table and
searching for transactions that need to be sent to that particular slave
database. Those transactions are then mirrored to the slave database, and
the MirroredTransaction table is updated to reflect that the transaction
has been sent.
If the transaction has been sent to all known slave hosts (all entries
in the MirrorHost table), then all records of it are purged from the
Pending tables.
Requirements:
---------------------------------
-PostgreSQL-8.1 (Older versions are no longer supported)
-Perl 5.6 or 5.8 (Other versions might work)
-PgPerl (http://gborg.postgresql.org/project/pgperl/projdisplay.php)
Upgrading from versions prior to 8.0
---------------------------------------
Users upgrading from a version of dbmirror prior to the one shipped with
PostgreSQL 8.0 will need to perform the following steps
1. Dump the database, then drop it (dropdb; do not use the -C option)
2. Create the database with createdb.
3. Run psql databasename -f MirrorSetup.sql
4. Restore the database (do not use the -C option of pg_dump/pg_restore)
5. Run the SQL commands: DROP TABLE "Pending"; DROP TABLE "PendingData";
   DROP TABLE "MirrorHost"; DROP TABLE "MirroredTransaction";
The above steps are needed (a) because the names of the tables used by dbmirror
to store data have changed, and (b) because all serial types must be recreated
in order for sequences to be mirrored properly.
Installation Instructions
------------------------------------------------------------------------
1) Compile pending.c
The file pending.c contains the recordchange trigger. This runs every
time a row inside of a table being mirrored changes.
To build the trigger run make on the "Makefile" in the DBMirror directory.
PostgreSQL-8.0 Make Instructions:
If you have already run "configure" in the top (pgsql) directory
then run "make" in the dbmirror directory to compile the trigger.
You should now have a file named pending.so that contains the trigger.
Install this file in your PostgreSQL lib directory (/usr/local/pgsql/lib)
2) Run MirrorSetup.sql
This file contains SQL commands to setup the Mirroring environment.
This includes
-Telling PostgreSQL about the "recordchange" trigger function.
-Creating the dbmirror_Pending,dbmirror_PendingData,dbmirror_MirrorHost,
dbmirror_MirroredTransaction tables
To execute the script use psql as follows
"psql -f MirrorSetup.sql MyDatabaseName"
where MyDatabaseName is the name of the database you wish to install mirroring
on (your master).
3) Create slaveDatabase.conf files.
Each slave database needs its own configuration file for the
DBMirror.pl script. See slaveDatabase.conf for a sample.
The master settings refer to the master database(The one that is
being mirrored).
The slave settings refer to the database that the data is being
mirrored to.
The slaveName setting in the configuration file must match the slave
name specified in the dbmirror_MirrorHost table.
DBMirror.pl can be run in two modes of operation:
A) It can connect directly to the slave database. To do this specify
a slave database name and optional host and port along with a username
and password. See slaveDatabase.conf for details.
The master user must have sufficient permissions to modify the Pending
tables and to read all of the tables being mirrored.
The slave user must have enough permissions on the slave database to
modify (INSERT, UPDATE, DELETE) any tables on the slave system that are
being mirrored.
B) The SQL statements that should be executed on the slave can be
written to files, which can then be executed against the slave database
through psql. This is suitable for setups where there is no direct
connection between the slave database and the master. A file is
generated for each transaction in the directory specified by
TransactionFileDirectory. The file name contains the date/time the
file was created along with the transaction id.
4) Add the trigger to tables.
Execute the SQL code in AddTrigger.sql once for each table that should
be mirrored. Replace MyTableName with the name of the table that should
be mirrored.
NOTE: DBMirror requires that every table being mirrored have a primary key
defined.
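The SQL in AddTrigger.sql amounts to a statement of roughly the following shape (a sketch based on the description above; the exact trigger name used by the script may differ):

```sql
CREATE TRIGGER "MyTableName_Trig" AFTER INSERT OR DELETE OR UPDATE
    ON "MyTableName" FOR EACH ROW
    EXECUTE PROCEDURE "recordchange" ();
```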
5) Create the slave database.
The DBMirror system keeps the contents of mirrored tables identical on the
master and slave databases. When you first install the mirror triggers the
master and slave databases must be the same.
If you are starting with an empty master database then the slave should
be empty as well. Otherwise use pg_dump to ensure that the slave database
tables are initially identical to the master.
6) Add entries in the dbmirror_MirrorHost table.
Each slave database must have an entry in the dbmirror_MirrorHost table.
The name of the host in the dbmirror_MirrorHost table must exactly match the
slaveHost variable for that slave in the configuration file.
For example:
INSERT INTO dbmirror_MirrorHost (SlaveName) VALUES ('backup_system');
7) Start DBMirror.pl
DBMirror.pl is the perl script that handles the mirroring.
It requires the Perl library Pg (see http://gborg.postgresql.org/project/pgperl/projdisplay.php).
It takes its configuration file as an argument (the one from step 3).
One instance of DBMirror.pl runs for each slave machine that is receiving
mirrored data.
Any errors are printed to standard out and emailed to the address specified in
the configuration file.
DBMirror.pl can be run from the master, the slave, or a third machine, as
long as it is able to access both the master and slave databases (this is
not required if SQL files are being generated).
8) Periodically run clean_pending.pl
clean_pending.pl cleans out any entries from the Pending tables that
have already been mirrored to all hosts in the MirrorHost table.
It uses the same configuration file as DBMirror.pl.
Normally DBMirror.pl will clean these tables as it goes but in some
circumstances this will not happen.
For example, if a transaction has been mirrored to all slaves except for
one, and that host is then removed from the MirrorHost table (it stops being
a mirror slave), the transactions that had already been mirrored to
all the other hosts will not be deleted from the Pending tables by
DBMirror.pl, since DBMirror.pl never runs against those transactions again
(they have already been sent to all the remaining hosts).
clean_pending.pl will remove these transactions.
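The purge rule that clean_pending.pl's DELETE query implements can be sketched as follows (an illustration only, not the script's actual code; `mirrored` maps each pending transaction id to the set of slave names it has already reached):

```python
def transactions_to_clean(mirrored, hosts):
    # A pending transaction can be purged once every host in MirrorHost
    # has received it, or when no mirror hosts are configured at all.
    return [xid for xid, reached in mirrored.items()
            if not hosts or reached >= hosts]
```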
TODO (Current Limitations)
----------
-Support for selective mirroring based on the content of data.
-Support for BLOBs.
-Support for multi-master mirroring with conflict resolution.
-Better support for dealing with Schema changes.
Significant Changes Since 7.4
----------------
-Support for mirroring SEQUENCEs
-Support for unix domain sockets
-Support for outputting slave SQL statements to a file
-The replication tables have been renamed (now dbmirror_pending, etc.)
Credits
-----------
Achilleus Mantzios <achill@matrix.gatewaynet.com>
Steven Singer
Navtech Systems Support Inc.
ssinger@navtechinc.com
@ -1,106 +0,0 @@
#!/usr/bin/perl
# clean_pending.pl
# This perl script removes entries from the dbmirror_Pending, dbmirror_PendingData,
# and dbmirror_MirroredTransaction tables that have already been mirrored to all hosts.
#
#
#
# Written by Steven Singer (ssinger@navtechinc.com)
# (c) 2001-2002 Navtech Systems Support Inc.
# Released under the GNU Public License version 2. See COPYING.
#
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
##############################################################################
# $PostgreSQL: pgsql/contrib/dbmirror/clean_pending.pl,v 1.5 2004/09/10 04:31:06 neilc Exp $
##############################################################################
=head1 NAME

clean_pending.pl - A Perl script to remove old entries from the
dbmirror_Pending, dbmirror_PendingData, and dbmirror_MirroredTransaction
tables.

=head1 SYNOPSIS

clean_pending.pl configFile

=head1 DESCRIPTION

This Perl script connects to the master database using the connection
settings in the configuration file given as a command line argument.
It then removes any entries from the dbmirror_Pending, dbmirror_PendingData,
and dbmirror_MirroredTransaction tables that have already been sent to all
hosts in dbmirror_MirrorHost.

=cut
BEGIN {
# add in a global path to files
#Ensure that Pg is in the path.
}
use strict;
use Pg;
if ($#ARGV != 0) {
die "usage: clean_pending.pl configFile\n";
}
if( ! defined do $ARGV[0]) {
die("Invalid Configuration file $ARGV[0]");
}
#connect to the database.
my $connectString = "host=$::masterHost dbname=$::masterDb user=$::masterUser password=$::masterPassword";
my $dbConn = Pg::connectdb($connectString);
unless($dbConn->status == PGRES_CONNECTION_OK) {
printf("Can't connect to database\n");
die;
}
my $result = $dbConn->exec("BEGIN");
unless($result->resultStatus == PGRES_COMMAND_OK) {
die $dbConn->errorMessage;
}
#delete all transactions that have been sent to all mirrorhosts
#or delete everything if no mirror hosts are defined.
# Postgres takes the "SELECT COUNT(*) FROM dbmirror_MirrorHost" subquery and
# turns it into an InitPlan; EXPLAIN shows this.
my $deletePendingQuery = 'DELETE FROM dbmirror_Pending WHERE (SELECT ';
$deletePendingQuery .= ' COUNT(*) FROM dbmirror_MirroredTransaction WHERE ';
$deletePendingQuery .= ' XID=dbmirror_Pending.XID) = (SELECT COUNT(*) FROM ';
$deletePendingQuery .= ' dbmirror_MirrorHost) OR (SELECT COUNT(*) FROM ';
$deletePendingQuery .= ' dbmirror_MirrorHost) = 0';
$result = $dbConn->exec($deletePendingQuery);
unless ($result->resultStatus == PGRES_COMMAND_OK ) {
printf($dbConn->errorMessage);
die;
}
$dbConn->exec("COMMIT");
$result = $dbConn->exec('VACUUM dbmirror_Pending');
unless ($result->resultStatus == PGRES_COMMAND_OK) {
printf($dbConn->errorMessage);
}
$result = $dbConn->exec('VACUUM dbmirror_PendingData');
unless($result->resultStatus == PGRES_COMMAND_OK) {
printf($dbConn->errorMessage);
}
$result = $dbConn->exec('VACUUM dbmirror_MirroredTransaction');
unless($result->resultStatus == PGRES_COMMAND_OK) {
printf($dbConn->errorMessage);
}
@ -1,711 +0,0 @@
/****************************************************************************
* pending.c
* $Id: pending.c,v 1.26 2006/07/11 17:26:58 momjian Exp $
* $PostgreSQL: pgsql/contrib/dbmirror/pending.c,v 1.26 2006/07/11 17:26:58 momjian Exp $
*
* This file contains a trigger for PostgreSQL to record changes to tables
* to a pending table for mirroring.
* All tables that should be mirrored should have this trigger hooked up to it.
*
* Written by Steven Singer (ssinger@navtechinc.com)
* (c) 2001-2002 Navtech Systems Support Inc.
* ALL RIGHTS RESERVED
*
* Permission to use, copy, modify, and distribute this software and its
* documentation for any purpose, without fee, and without a written agreement
* is hereby granted, provided that the above copyright notice and this
* paragraph and the following two paragraphs appear in all copies.
*
* IN NO EVENT SHALL THE AUTHOR OR DISTRIBUTORS BE LIABLE TO ANY PARTY FOR
* DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
* LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
* DOCUMENTATION, EVEN IF THE AUTHOR OR DISTRIBUTORS HAVE BEEN ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*
* THE AUTHOR AND DISTRIBUTORS SPECIFICALLY DISCLAIMS ANY WARRANTIES,
* INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
* AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS
* ON AN "AS IS" BASIS, AND THE AUTHOR AND DISTRIBUTORS HAS NO OBLIGATIONS TO
* PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
*
*
***************************************************************************/
#include "postgres.h"
#include "executor/spi.h"
#include "commands/sequence.h"
#include "commands/trigger.h"
#include "utils/lsyscache.h"
#include "utils/array.h"
PG_MODULE_MAGIC;
enum FieldUsage
{
PRIMARY = 0, NONPRIMARY, ALL, NUM_FIELDUSAGE
};
int storePending(char *cpTableName, HeapTuple tBeforeTuple,
HeapTuple tAfterTuple,
TupleDesc tTupdesc,
Oid tableOid,
char cOp);
int storeKeyInfo(char *cpTableName, HeapTuple tTupleData, TupleDesc tTuplDesc,
Oid tableOid);
int storeData(char *cpTableName, HeapTuple tTupleData,
TupleDesc tTupleDesc, Oid tableOid, int iIncludeKeyData);
int2vector *getPrimaryKey(Oid tblOid);
char *packageData(HeapTuple tTupleData, TupleDesc tTupleDecs, Oid tableOid,
enum FieldUsage eKeyUsage);
#define BUFFER_SIZE 256
#define MAX_OID_LEN 10
/*#define DEBUG_OUTPUT 1 */
extern Datum recordchange(PG_FUNCTION_ARGS);
PG_FUNCTION_INFO_V1(recordchange);
#if defined DEBUG_OUTPUT
#define debug_msg2(x,y) elog(NOTICE,x,y)
#define debug_msg(x) elog(NOTICE,x)
#define debug_msg3(x,y,z) elog(NOTICE,x,y,z)
#else
#define debug_msg2(x,y)
#define debug_msg(x)
#define debug_msg3(x,y,z)
#endif
extern Datum setval_mirror(PG_FUNCTION_ARGS);
extern Datum setval3_mirror(PG_FUNCTION_ARGS);
extern Datum nextval_mirror(PG_FUNCTION_ARGS);
static void saveSequenceUpdate(Oid relid, int64 nextValue, bool iscalled);
/*****************************************************************************
* The entry point for the trigger function.
* The Trigger takes a single SQL 'text' argument indicating the name of the
* table the trigger was applied to. If this name is incorrect so will the
* mirroring.
****************************************************************************/
Datum
recordchange(PG_FUNCTION_ARGS)
{
TriggerData *trigdata;
TupleDesc tupdesc;
HeapTuple beforeTuple = NULL;
HeapTuple afterTuple = NULL;
HeapTuple retTuple = NULL;
char *tblname;
char op = 0;
char *schemaname;
char *fullyqualtblname;
char *pkxpress = NULL;
if (fcinfo->context != NULL)
{
if (SPI_connect() < 0)
{
ereport(ERROR, (errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("dbmirror:recordchange could not connect to SPI")));
return -1;
}
trigdata = (TriggerData *) fcinfo->context;
/* Extract the table name */
tblname = SPI_getrelname(trigdata->tg_relation);
#ifndef NOSCHEMAS
schemaname = get_namespace_name(RelationGetNamespace(trigdata->tg_relation));
fullyqualtblname = SPI_palloc(strlen(tblname) +
strlen(schemaname) + 6);
sprintf(fullyqualtblname, "\"%s\".\"%s\"",
schemaname, tblname);
#else
fullyqualtblname = SPI_palloc(strlen(tblname) + 3);
sprintf(fullyqualtblname, "\"%s\"", tblname);
#endif
tupdesc = trigdata->tg_relation->rd_att;
if (TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event))
{
retTuple = trigdata->tg_newtuple;
beforeTuple = trigdata->tg_trigtuple;
afterTuple = trigdata->tg_newtuple;
op = 'u';
}
else if (TRIGGER_FIRED_BY_INSERT(trigdata->tg_event))
{
retTuple = trigdata->tg_trigtuple;
afterTuple = trigdata->tg_trigtuple;
op = 'i';
}
else if (TRIGGER_FIRED_BY_DELETE(trigdata->tg_event))
{
retTuple = trigdata->tg_trigtuple;
beforeTuple = trigdata->tg_trigtuple;
op = 'd';
}
else
{
ereport(ERROR, (errcode(ERRCODE_TRIGGERED_ACTION_EXCEPTION),
errmsg("dbmirror:recordchange Unknown operation")));
}
if (storePending(fullyqualtblname, beforeTuple, afterTuple,
tupdesc, retTuple->t_tableOid, op))
{
/* An error occurred. Skip the operation. */
ereport(ERROR,
(errcode(ERRCODE_TRIGGERED_ACTION_EXCEPTION),
errmsg("operation could not be mirrored")));
return PointerGetDatum(NULL);
}
debug_msg("dbmirror:recordchange returning on success");
SPI_pfree(fullyqualtblname);
if (pkxpress != NULL)
SPI_pfree(pkxpress);
SPI_finish();
return PointerGetDatum(retTuple);
}
else
{
/*
* Not being called as a trigger.
*/
return PointerGetDatum(NULL);
}
}
/*****************************************************************************
* Constructs and executes an SQL query to write a record of this tuple change
* to the pending table.
*****************************************************************************/
int
storePending(char *cpTableName, HeapTuple tBeforeTuple,
HeapTuple tAfterTuple,
TupleDesc tTupDesc,
Oid tableOid,
char cOp)
{
char *cpQueryBase = "INSERT INTO dbmirror_pending (TableName,Op,XID) VALUES ($1,$2,$3)";
int iResult = 0;
HeapTuple tCurTuple;
char nulls[3] = " ";
/* Points to the current tuple (before or after) */
Datum saPlanData[3];
Oid taPlanArgTypes[4] = {NAMEOID,
CHAROID,
INT4OID};
void *vpPlan;
tCurTuple = tBeforeTuple ? tBeforeTuple : tAfterTuple;
vpPlan = SPI_prepare(cpQueryBase, 3, taPlanArgTypes);
if (vpPlan == NULL)
ereport(ERROR, (errcode(ERRCODE_TRIGGERED_ACTION_EXCEPTION),
errmsg("dbmirror:storePending error creating plan")));
saPlanData[0] = PointerGetDatum(cpTableName);
saPlanData[1] = CharGetDatum(cOp);
saPlanData[2] = Int32GetDatum(GetCurrentTransactionId());
iResult = SPI_execp(vpPlan, saPlanData, nulls, 1);
if (iResult < 0)
elog(NOTICE, "storePending fired (%s) returned %d",
cpQueryBase, iResult);
debug_msg("dbmirror:storePending row successfully stored in pending table");
if (cOp == 'd')
{
/**
* This is a record of a delete operation.
* Just store the key data.
*/
iResult = storeKeyInfo(cpTableName,
tBeforeTuple, tTupDesc, tableOid);
}
else if (cOp == 'i')
{
/**
* An Insert operation.
* Store all data
*/
iResult = storeData(cpTableName, tAfterTuple,
tTupDesc, tableOid, TRUE);
}
else
{
/* op must be an update. */
iResult = storeKeyInfo(cpTableName, tBeforeTuple,
tTupDesc, tableOid);
iResult = iResult ? iResult :
storeData(cpTableName, tAfterTuple, tTupDesc,
tableOid, TRUE);
}
debug_msg("dbmirror:storePending done storing keyinfo");
return iResult;
}
int
storeKeyInfo(char *cpTableName, HeapTuple tTupleData,
TupleDesc tTupleDesc, Oid tableOid)
{
Oid saPlanArgTypes[1] = {NAMEOID};
char *insQuery = "INSERT INTO dbmirror_pendingdata (SeqId,IsKey,Data) VALUES(currval('dbmirror_pending_seqid_seq'),'t',$1)";
void *pplan;
Datum saPlanData[1];
char *cpKeyData;
int iRetCode;
pplan = SPI_prepare(insQuery, 1, saPlanArgTypes);
if (pplan == NULL)
{
elog(NOTICE, "could not prepare INSERT plan");
return -1;
}
/* pplan = SPI_saveplan(pplan); */
cpKeyData = packageData(tTupleData, tTupleDesc, tableOid, PRIMARY);
if (cpKeyData == NULL)
ereport(ERROR,
(errcode(ERRCODE_UNDEFINED_OBJECT),
/* cpTableName already contains quotes... */
errmsg("there is no PRIMARY KEY for table %s",
cpTableName)));
debug_msg2("dbmirror:storeKeyInfo key data: %s", cpKeyData);
saPlanData[0] = PointerGetDatum(cpKeyData);
iRetCode = SPI_execp(pplan, saPlanData, NULL, 1);
if (cpKeyData != NULL)
SPI_pfree(cpKeyData);
if (iRetCode != SPI_OK_INSERT)
ereport(ERROR,
(errcode(ERRCODE_TRIGGERED_ACTION_EXCEPTION),
errmsg("error inserting row in pendingDelete")));
debug_msg("insert successful");
return 0;
}
int2vector *
getPrimaryKey(Oid tblOid)
{
char *queryBase;
char *query;
bool isNull;
int2vector *resultKey;
int2vector *tpResultKey;
HeapTuple resTuple;
Datum resDatum;
int ret;
queryBase = "SELECT indkey FROM pg_index WHERE indisprimary='t' AND indrelid=";
query = SPI_palloc(strlen(queryBase) + MAX_OID_LEN + 1);
sprintf(query, "%s%d", queryBase, tblOid);
ret = SPI_exec(query, 1);
SPI_pfree(query);
if (ret != SPI_OK_SELECT || SPI_processed != 1)
return NULL;
resTuple = SPI_tuptable->vals[0];
resDatum = SPI_getbinval(resTuple, SPI_tuptable->tupdesc, 1, &isNull);
tpResultKey = (int2vector *) DatumGetPointer(resDatum);
resultKey = SPI_palloc(VARSIZE(tpResultKey));
memcpy(resultKey, tpResultKey, VARSIZE(tpResultKey));
return resultKey;
}
/******************************************************************************
* Stores a copy of the non-key data for the row.
*****************************************************************************/
int
storeData(char *cpTableName, HeapTuple tTupleData,
TupleDesc tTupleDesc, Oid tableOid, int iIncludeKeyData)
{
Oid planArgTypes[1] = {NAMEOID};
char *insQuery = "INSERT INTO dbmirror_pendingdata (SeqId,IsKey,Data) VALUES(currval('dbmirror_pending_seqid_seq'),'f',$1)";
void *pplan;
Datum planData[1];
char *cpKeyData;
int iRetValue;
pplan = SPI_prepare(insQuery, 1, planArgTypes);
if (pplan == NULL)
{
elog(NOTICE, "could not prepare INSERT plan");
return -1;
}
/* pplan = SPI_saveplan(pplan); */
if (iIncludeKeyData == 0)
cpKeyData = packageData(tTupleData, tTupleDesc,
tableOid, NONPRIMARY);
else
cpKeyData = packageData(tTupleData, tTupleDesc, tableOid, ALL);
planData[0] = PointerGetDatum(cpKeyData);
iRetValue = SPI_execp(pplan, planData, NULL, 1);
if (cpKeyData != 0)
SPI_pfree(cpKeyData);
if (iRetValue != SPI_OK_INSERT)
{
elog(NOTICE, "error inserting row in pendingDelete");
return -1;
}
debug_msg("dbmirror:storeKeyData insert successful");
return 0;
}
/**
* Packages the data in tTupleData into a string of the format
* FieldName='value text', where any quotes inside the value text
* are doubled, and any backslashes in the value text are likewise
* doubled.
*
* tTupleDesc should be a description of the tuple stored in
* tTupleData.
*
* eFieldUsage specifies which fields to use.
* PRIMARY implies include only primary key fields.
* NONPRIMARY implies include only non-primary key fields.
* ALL implies include all fields.
*/
char *
packageData(HeapTuple tTupleData, TupleDesc tTupleDesc, Oid tableOid,
enum FieldUsage eKeyUsage)
{
int iNumCols;
int2vector *tpPKeys = NULL;
int iColumnCounter;
char *cpDataBlock;
int iDataBlockSize;
int iUsedDataBlock;
iNumCols = tTupleDesc->natts;
if (eKeyUsage != ALL)
{
tpPKeys = getPrimaryKey(tableOid);
if (tpPKeys == NULL)
return NULL;
}
if (tpPKeys != NULL)
debug_msg("dbmirror:packageData have primary keys");
cpDataBlock = SPI_palloc(BUFFER_SIZE);
iDataBlockSize = BUFFER_SIZE;
iUsedDataBlock = 0; /* To account for the null */
for (iColumnCounter = 1; iColumnCounter <= iNumCols; iColumnCounter++)
{
int iIsPrimaryKey;
int iPrimaryKeyIndex;
char *cpUnFormatedPtr;
char *cpFormatedPtr;
char *cpFieldName;
char *cpFieldData;
if (eKeyUsage != ALL)
{
/* Determine if this is a primary key or not. */
iIsPrimaryKey = 0;
for (iPrimaryKeyIndex = 0;
iPrimaryKeyIndex < tpPKeys->dim1;
iPrimaryKeyIndex++)
{
if (tpPKeys->values[iPrimaryKeyIndex] == iColumnCounter)
{
iIsPrimaryKey = 1;
break;
}
}
if (iIsPrimaryKey ? (eKeyUsage != PRIMARY) :
(eKeyUsage != NONPRIMARY))
{
/**
* Don't use.
*/
debug_msg("dbmirror:packageData skipping column");
continue;
}
} /* KeyUsage!=ALL */
if (tTupleDesc->attrs[iColumnCounter - 1]->attisdropped)
{
/**
* This column has been dropped.
* Do not mirror it.
*/
continue;
}
cpFieldName = DatumGetPointer(NameGetDatum
(&tTupleDesc->attrs
[iColumnCounter - 1]->attname));
debug_msg2("dbmirror:packageData field name: %s", cpFieldName);
while (iDataBlockSize - iUsedDataBlock <
strlen(cpFieldName) + 6)
{
cpDataBlock = SPI_repalloc(cpDataBlock,
iDataBlockSize +
BUFFER_SIZE);
iDataBlockSize = iDataBlockSize + BUFFER_SIZE;
}
sprintf(cpDataBlock + iUsedDataBlock, "\"%s\"=", cpFieldName);
iUsedDataBlock = iUsedDataBlock + strlen(cpFieldName) + 3;
cpFieldData = SPI_getvalue(tTupleData, tTupleDesc,
iColumnCounter);
cpUnFormatedPtr = cpFieldData;
cpFormatedPtr = cpDataBlock + iUsedDataBlock;
if (cpFieldData != NULL)
{
*cpFormatedPtr = '\'';
iUsedDataBlock++;
cpFormatedPtr++;
}
else
{
sprintf(cpFormatedPtr, " ");
iUsedDataBlock++;
cpFormatedPtr++;
continue;
}
debug_msg2("dbmirror:packageData field data: \"%s\"",
cpFieldData);
debug_msg("dbmirror:packageData starting format loop");
while (*cpUnFormatedPtr != 0)
{
while (iDataBlockSize - iUsedDataBlock < 2)
{
cpDataBlock = SPI_repalloc(cpDataBlock,
iDataBlockSize
+ BUFFER_SIZE);
iDataBlockSize = iDataBlockSize + BUFFER_SIZE;
cpFormatedPtr = cpDataBlock + iUsedDataBlock;
}
if (*cpUnFormatedPtr == '\\' || *cpUnFormatedPtr == '\'')
{
*cpFormatedPtr = *cpUnFormatedPtr;
cpFormatedPtr++;
iUsedDataBlock++;
}
*cpFormatedPtr = *cpUnFormatedPtr;
cpFormatedPtr++;
cpUnFormatedPtr++;
iUsedDataBlock++;
}
SPI_pfree(cpFieldData);
while (iDataBlockSize - iUsedDataBlock < 3)
{
cpDataBlock = SPI_repalloc(cpDataBlock,
iDataBlockSize +
BUFFER_SIZE);
iDataBlockSize = iDataBlockSize + BUFFER_SIZE;
cpFormatedPtr = cpDataBlock + iUsedDataBlock;
}
sprintf(cpFormatedPtr, "' ");
iUsedDataBlock = iUsedDataBlock + 2;
debug_msg2("dbmirror:packageData data block: \"%s\"",
cpDataBlock);
} /* for iColumnCounter */
if (tpPKeys != NULL)
SPI_pfree(tpPKeys);
debug_msg3("dbmirror:packageData returning DataBlockSize:%d iUsedDataBlock:%d",
iDataBlockSize,
iUsedDataBlock);
memset(cpDataBlock + iUsedDataBlock, 0, iDataBlockSize - iUsedDataBlock);
return cpDataBlock;
}
/*
* Support for mirroring sequence objects.
*/
PG_FUNCTION_INFO_V1(setval_mirror);
Datum
setval_mirror(PG_FUNCTION_ARGS)
{
Oid relid = PG_GETARG_OID(0);
int64 next = PG_GETARG_INT64(1);
int64 result;
result = DatumGetInt64(DirectFunctionCall2(setval_oid,
ObjectIdGetDatum(relid),
Int64GetDatum(next)));
saveSequenceUpdate(relid, result, true);
PG_RETURN_INT64(result);
}
PG_FUNCTION_INFO_V1(setval3_mirror);
Datum
setval3_mirror(PG_FUNCTION_ARGS)
{
Oid relid = PG_GETARG_OID(0);
int64 next = PG_GETARG_INT64(1);
bool iscalled = PG_GETARG_BOOL(2);
int64 result;
result = DatumGetInt64(DirectFunctionCall3(setval3_oid,
ObjectIdGetDatum(relid),
Int64GetDatum(next),
BoolGetDatum(iscalled)));
saveSequenceUpdate(relid, result, iscalled);
PG_RETURN_INT64(result);
}
PG_FUNCTION_INFO_V1(nextval_mirror);
Datum
nextval_mirror(PG_FUNCTION_ARGS)
{
Oid relid = PG_GETARG_OID(0);
int64 result;
result = DatumGetInt64(DirectFunctionCall1(nextval_oid,
ObjectIdGetDatum(relid)));
saveSequenceUpdate(relid, result, true);
PG_RETURN_INT64(result);
}
static void
saveSequenceUpdate(Oid relid, int64 nextValue, bool iscalled)
{
Oid insertArgTypes[2] = {NAMEOID, INT4OID};
Oid insertDataArgTypes[1] = {NAMEOID};
void *insertPlan;
void *insertDataPlan;
Datum insertDatum[2];
Datum insertDataDatum[1];
char nextSequenceText[64];
const char *insertQuery =
"INSERT INTO dbmirror_Pending (TableName,Op,XID) VALUES" \
"($1,'s',$2)";
const char *insertDataQuery =
"INSERT INTO dbmirror_PendingData(SeqId,IsKey,Data) VALUES " \
"(currval('dbmirror_pending_seqid_seq'),'t',$1)";
if (SPI_connect() < 0)
ereport(ERROR,
(errcode(ERRCODE_EXTERNAL_ROUTINE_EXCEPTION),
errmsg("dbmirror:savesequenceupdate could not connect to SPI")));
insertPlan = SPI_prepare(insertQuery, 2, insertArgTypes);
insertDataPlan = SPI_prepare(insertDataQuery, 1, insertDataArgTypes);
if (insertPlan == NULL || insertDataPlan == NULL)
ereport(ERROR,
(errcode(ERRCODE_EXTERNAL_ROUTINE_EXCEPTION),
errmsg("dbmirror:savesequenceupdate error creating plan")));
insertDatum[0] = PointerGetDatum(get_rel_name(relid));
insertDatum[1] = Int32GetDatum(GetCurrentTransactionId());
snprintf(nextSequenceText, sizeof(nextSequenceText),
INT64_FORMAT ",'%c'",
nextValue, iscalled ? 't' : 'f');
/*
* note type cheat here: we prepare a C string and then claim it is a
* NAME, which the system will coerce to varchar for us.
*/
insertDataDatum[0] = PointerGetDatum(nextSequenceText);
debug_msg2("dbmirror:savesequenceupdate: Setting value as %s",
nextSequenceText);
debug_msg("dbmirror:About to execute insert query");
if (SPI_execp(insertPlan, insertDatum, NULL, 1) != SPI_OK_INSERT)
ereport(ERROR,
(errcode(ERRCODE_EXTERNAL_ROUTINE_EXCEPTION),
errmsg("error inserting row in dbmirror_Pending")));
if (SPI_execp(insertDataPlan, insertDataDatum, NULL, 1) != SPI_OK_INSERT)
ereport(ERROR,
(errcode(ERRCODE_EXTERNAL_ROUTINE_EXCEPTION),
errmsg("error inserting row in dbmirror_PendingData")));
debug_msg("dbmirror:Insert query finished");
SPI_pfree(insertPlan);
SPI_pfree(insertDataPlan);
SPI_finish();
}
@ -1,35 +0,0 @@
#########################################################################
# Config file for DBMirror.pl
# This file contains a sample configuration file for DBMirror.pl
# It contains configuration information to mirror data from
# the master database to a single slave system.
#
# $PostgreSQL: pgsql/contrib/dbmirror/slaveDatabase.conf,v 1.3 2004/09/10 04:31:06 neilc Exp $
#######################################################################
$masterHost = "masterMachine.mydomain.com";
$masterDb = "myDatabase";
$masterUser = "postgres";
$masterPassword = "postgrespassword";
# Where to email Error messages to
# $errorEmailAddr = "me@mydomain.com";
$slaveInfo->{"slaveName"} = "backupMachine";
$slaveInfo->{"slaveHost"} = "backupMachine.mydomain.com";
$slaveInfo->{"slaveDb"} = "myDatabase";
$slaveInfo->{"slavePort"} = 5432;
$slaveInfo->{"slaveUser"} = "postgres";
$slaveInfo->{"slavePassword"} = "postgrespassword";
# If uncommented then text files with SQL statements are generated instead
# of connecting to the slave database directly.
# slaveDb should then be commented out.
# $slaveInfo->{"TransactionFileDirectory"} = '/tmp';
#
# The number of seconds dbmirror should sleep between checks to see
# if more data is ready to be mirrored.
$sleepInterval = 60;
#If you want to use syslog
# $syslog = 1;
@ -1,16 +0,0 @@
# $PostgreSQL: pgsql/contrib/fulltextindex/Makefile,v 1.14 2005/09/27 17:13:02 tgl Exp $
MODULES = fti
DATA_built = fti.sql
DOCS = README.fti
SCRIPTS = fti.pl
ifdef USE_PGXS
PGXS := $(shell pg_config --pgxs)
include $(PGXS)
else
subdir = contrib/fulltextindex
top_builddir = ../..
include $(top_builddir)/src/Makefile.global
include $(top_srcdir)/contrib/contrib-global.mk
endif
@ -1,200 +0,0 @@
An attempt at some sort of Full Text Indexing for PostgreSQL.
The included software is an attempt to add some sort of Full Text Indexing
support to PostgreSQL. I mean by this that we can ask questions like:
Give me all rows that have 'stills' and 'nash' in the 'artist' or 'title'
fields.
Of course we can write this as:
select * from cds where (artist ~* 'stills' or title ~* 'stills') and
(artist ~* 'nash' or title ~* 'nash');
But this does not use any indices, and therefore, if your database
gets very large, it will not have very high performance (the above query
requires a sequential scan of the table).
The approach used by this add-on is to define a trigger on the table and
columns you want to do these queries on. On every insert to the table, it
takes the value in the specified columns, breaks the text in these columns
up into pieces, and stores all sub-strings into another table, together
with a reference to the row in the original table that contained this
sub-string (it uses the oid of that row).
By creating an index over the 'fti-table', we can now search for
substrings that occur in the original table. By making a join between
the fti-table and the orig-table, we can get the actual rows we want
(this can also be done by using subselects, but subselects are currently
inefficient in PostgreSQL, and maybe there are other ways too).
The trigger code also allows an array called StopWords, which prevents
certain words from being indexed.
As an example we take the previous query, where we assume we have all
sub-strings in the table 'cds-fti':
select c.*
from cds c, cds-fti f1, cds-fti f2
where f1.string ~ '^stills' and
f2.string ~ '^nash' and
f1.id = c.oid and
f2.id = c.oid ;
We can use the ~ (case-sensitive regular expression) here, because of
the way sub-strings are built: from right to left, ie. house -> 'se' +
'use' + 'ouse' + 'house'. If a ~ search starts with a ^ (match start of
string), btree indices can be used by PostgreSQL.
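The right-to-left sub-string construction described above can be sketched as a small helper (illustrative only; the fti trigger itself does this in C):

```python
def suffixes(word, min_len=2):
    # Build sub-strings from right to left, shortest first:
    # "house" -> "se", "use", "ouse", "house".
    return [word[i:] for i in range(len(word) - min_len, -1, -1)]
```

Because every stored entry is a suffix of the original word, a query like ~ '^stills' matches any word containing 'stills', while the anchored pattern still allows a btree index scan.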
Now, how do we create the trigger that maintains the fti-table? First: the
fti-table should have the following schema:
create table cds-fti ( string varchar(N), id oid ) without oids;
Don't change the *names* of the columns; the varchar() can in fact also
be of text type. If you do use varchar, make sure the largest possible
sub-string will fit.
Then create the function that contains the trigger:
create function fti() returns trigger as
'/path/to/fti.so' language C;
And finally define the trigger on the 'cds' table:
create trigger cds-fti-trigger after update or insert or delete on cds
for each row execute procedure fti(cds-fti, artist, title);
Here, the trigger will be defined on table 'cds', it will create
sub-strings from the fields 'artist' and 'title', and it will place
those sub-strings in the table 'cds-fti'.
Now populate the table 'cds'. This will also populate the table 'cds-fti'.
It's fastest to populate the table *before* you create the indices. Use the
supplied 'fti.pl' to assist you with this.
Before you start using the system, you should at least have the following
indices:
create index cds-fti-idx on cds-fti (string); -- String matching
create index cds-fti-idx on cds-fti (id); -- For deleting a cds row
create index cds-oid-idx on cds (oid); -- For joining cds to cds-fti
To get the most performance out of this, you should have 'cds-fti'
clustered on disk, ie. all rows with the same sub-strings should be
close to each other. There are 3 ways of doing this:
1. After you have created the indices, execute 'cluster cds-fti-idx on cds-fti'.
2. Do a 'select * into tmp-table from cds-fti order by string' *before*
you create the indices, then 'drop table cds-fti' and
'alter table tmp-table rename to cds-fti'
3. *Before* creating indices, dump the contents of the cds-fti table using
'pg_dump -a -t cds-fti dbase-name', remove the \connect
from the beginning and the \. from the end, and sort it using the
UNIX 'sort' program, and reload the data.
Method 1 is very slow, 2 a lot faster, and for very large tables, 3 is
preferred.
BENCH:
~~~~~
Maarten Boekhold <maartenb@dutepp0.et.tudelft.nl>
The following data was generated by the 'timings.sh' script included
in this directory. It uses a very large table with music-related
articles as a source for the fti-table. The tables used are:
product : contains product information : 540.429 rows
artist_fti : fti table for product : 4.501.321 rows
clustered : same as above, only clustered : 4.501.321 rows
A sequential scan of the artist_fti table (and thus also the clustered table)
takes around 6:16 minutes....
Unfortunately I cannot provide anybody else with this test-data, since I
am not allowed to redistribute the data (it's a database being sold by
a couple of wholesale companies). Anyways, it's megabytes, so you probably
wouldn't want it in this distribution anyways.
I haven't tested this with less data.
The test-machine is a Pentium 133, 64 MB, Linux 2.0.32 with the database
on a 'QUANTUM BIGFOOT_CY4320A, 4134MB w/67kB Cache, CHS=8960/15/63'. This
is a very slow disk.
The postmaster was running with:
postmaster -i -b /usr/local/pgsql/bin/postgres -S 1024 -B 256 \
-o -o /usr/local/pgsql/debug-output -F -d 1
('trashing' means a 'select count(*) from artist_fti' to completely trash
any disk-caches and buffers....)
TESTING ON UNCLUSTERED FTI
trashing
1: ^lapton and ^ric : 0.050u 0.000s 5m37.484s 0.01%
2: ^lapton and ^ric : 0.050u 0.030s 5m32.447s 0.02%
3: ^lapton and ^ric : 0.030u 0.020s 5m28.822s 0.01%
trashing
1: ^lling and ^tones : 0.020u 0.030s 0m54.313s 0.09%
2: ^lling and ^tones : 0.040u 0.030s 0m5.057s 1.38%
3: ^lling and ^tones : 0.010u 0.050s 0m2.072s 2.89%
trashing
1: ^aughan and ^evie : 0.020u 0.030s 0m26.241s 0.19%
2: ^aughan and ^evie : 0.050u 0.010s 0m1.316s 4.55%
3: ^aughan and ^evie : 0.030u 0.020s 0m1.029s 4.85%
trashing
1: ^lling : 0.040u 0.010s 0m55.104s 0.09%
2: ^lling : 0.030u 0.030s 0m4.716s 1.27%
3: ^lling : 0.040u 0.010s 0m2.157s 2.31%
trashing
1: ^stev and ^ray and ^vaugh : 0.040u 0.000s 1m5.630s 0.06%
2: ^stev and ^ray and ^vaugh : 0.050u 0.020s 1m3.561s 0.11%
3: ^stev and ^ray and ^vaugh : 0.050u 0.010s 1m5.923s 0.09%
trashing
1: ^lling (no join) : 0.050u 0.020s 0m24.139s 0.28%
2: ^lling (no join) : 0.040u 0.040s 0m1.087s 7.35%
3: ^lling (no join) : 0.020u 0.030s 0m0.772s 6.48%
trashing
1: ^vaughan (no join) : 0.040u 0.030s 0m9.075s 0.77%
2: ^vaughan (no join) : 0.030u 0.010s 0m0.609s 6.56%
3: ^vaughan (no join) : 0.040u 0.010s 0m0.503s 9.94%
trashing
1: ^rol (no join) : 0.020u 0.030s 0m49.898s 0.10%
2: ^rol (no join) : 0.030u 0.020s 0m3.136s 1.59%
3: ^rol (no join) : 0.030u 0.020s 0m1.231s 4.06%
TESTING ON CLUSTERED FTI
trashing
1: ^lapton and ^ric : 0.020u 0.020s 2m17.120s 0.02%
2: ^lapton and ^ric : 0.030u 0.020s 2m11.767s 0.03%
3: ^lapton and ^ric : 0.040u 0.010s 2m8.128s 0.03%
trashing
1: ^lling and ^tones : 0.020u 0.030s 0m18.179s 0.27%
2: ^lling and ^tones : 0.030u 0.010s 0m1.897s 2.10%
3: ^lling and ^tones : 0.040u 0.010s 0m1.619s 3.08%
trashing
1: ^aughan and ^evie : 0.070u 0.010s 0m11.765s 0.67%
2: ^aughan and ^evie : 0.040u 0.010s 0m1.198s 4.17%
3: ^aughan and ^evie : 0.030u 0.020s 0m0.872s 5.73%
trashing
1: ^lling : 0.040u 0.000s 0m28.623s 0.13%
2: ^lling : 0.030u 0.010s 0m2.339s 1.70%
3: ^lling : 0.030u 0.010s 0m1.975s 2.02%
trashing
1: ^stev and ^ray and ^vaugh : 0.020u 0.010s 0m17.667s 0.16%
2: ^stev and ^ray and ^vaugh : 0.030u 0.010s 0m3.745s 1.06%
3: ^stev and ^ray and ^vaugh : 0.030u 0.020s 0m3.439s 1.45%
trashing
1: ^lling (no join) : 0.020u 0.040s 0m2.218s 2.70%
2: ^lling (no join) : 0.020u 0.020s 0m0.506s 7.90%
3: ^lling (no join) : 0.030u 0.030s 0m0.510s 11.76%
trashing
1: ^vaughan (no join) : 0.040u 0.050s 0m2.048s 4.39%
2: ^vaughan (no join) : 0.030u 0.020s 0m0.332s 15.04%
3: ^vaughan (no join) : 0.040u 0.010s 0m0.318s 15.72%
trashing
1: ^rol (no join) : 0.020u 0.030s 0m2.384s 2.09%
2: ^rol (no join) : 0.020u 0.030s 0m0.676s 7.39%
3: ^rol (no join) : 0.020u 0.030s 0m0.697s 7.17%


@ -1 +0,0 @@
Place "stop" words in lookup table


@ -1,25 +0,0 @@
WARNING
-------
This implementation of full text indexing is very slow and inefficient. It is
STRONGLY recommended that you switch to using contrib/tsearch, which offers
these features:
Advantages
----------
* Actively developed and improved
* Tight integration with OpenFTS (openfts.sourceforge.net)
* Orders of magnitude faster (e.g. 300 times faster for a two-keyword search)
* No extra tables or multi-way joins required
* Select syntax allows easy 'and'ing, 'or'ing and 'not'ing of keywords
* Built-in stemmer with customisable dictionaries (i.e. searching for 'jellies' will find 'jelly')
* Stop words automatically ignored
* Supports non-C locales
Disadvantages
-------------
* Only indexes full words; substring searches on words won't work
  (e.g. searching for 'burg' won't find 'burger')
Due to the deficiencies in this module, it is quite likely that it will be removed from the standard PostgreSQL distribution in the future.
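For reference, the substring capability this module trades on comes from indexing every word suffix of length two or more, so that an anchored prefix match such as ~ '^burg' can find 'burger'. A minimal standalone sketch of that scheme (gen_suffixes is a hypothetical helper, not code from the module):

```c
#include <string.h>

/* Hypothetical helper showing the suffix scheme fti.c and fti.pl share:
 * every suffix of 'word' that is at least min_len characters long is
 * indexed, e.g. "word" -> "word", "ord", "rd".  Pointers into 'word'
 * are written to out[]; returns the number of suffixes produced. */
static int
gen_suffixes(const char *word, int min_len, const char *out[], int max_out)
{
	int			len = (int) strlen(word);
	int			n = 0;
	int			i;

	for (i = 0; i <= len - min_len && n < max_out; i++)
		out[n++] = word + i;	/* suffix starting at position i */
	return n;
}
```

With the suffixes of 'burger' stored, an index-assisted scan for strings starting with 'burg' hits the full-word entry, which is how this module answers the substring searches that tsearch cannot.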


@ -1,468 +0,0 @@
#include "postgres.h"
#include <ctype.h>
#include "executor/spi.h"
#include "commands/trigger.h"
/*
* Trigger function accepts variable number of arguments:
*
* $PostgreSQL: pgsql/contrib/fulltextindex/fti.c,v 1.27 2006/05/30 22:12:12 tgl Exp $
*
* 1. relation in which to store the substrings
* 2. fields to extract substrings from
*
* The relation in which to insert *must* have the following layout:
*
* string varchar(#)
* id oid
*
* where # is the largest size of the varchar columns being indexed
*
* Example:
*
* -- Create the SQL function based on the compiled shared object
* create function fti() returns trigger as
* '/usr/local/pgsql/lib/contrib/fti.so' language C;
*
* -- Create the FTI table
* create table product_fti (string varchar(255), id oid) without oids;
*
* -- Create an index to assist string matches
* create index product_fti_string_idx on product_fti (string);
*
* -- Create an index to assist trigger'd deletes
* create index product_fti_id_idx on product_fti (id);
*
* -- Create an index on the product oid column to assist joins
* -- between the fti table and the product table
* create index product_oid_idx on product (oid);
*
* -- Create the trigger to perform incremental changes to the full text index.
* create trigger product_fti_trig after update or insert or delete on product
* for each row execute procedure fti(product_fti, title, artist);
* ^^^^^^^^^^^
* table where full text index is stored
* ^^^^^^^^^^^^^
* columns to index in the base table
*
* After populating 'product', try something like:
*
* SELECT DISTINCT(p.*) FROM product p, product_fti f1, product_fti f2 WHERE
* f1.string ~ '^slippery' AND f2.string ~ '^wet' AND p.oid=f1.id AND p.oid=f2.id;
*
* To check that your indexes are being used correctly, make sure you
* EXPLAIN SELECT ... your test query above.
*
* CHANGELOG
* ---------
*
* august 3 2001
* Extended fti function to accept more than one column as a
* parameter and all specified columns are indexed. Changed
* all uses of sprintf to snprintf. Made error messages more
* consistent.
*
* march 4 1998 Changed breakup() to return fewer substrings. Only break up
* into word parts which are in turn shortened from the start
* of the word (i.e. word, ord, rd)
* Did allocation of substring buffer outside of breakup()
*
* oct. 5 1997, fixed a bug in string breakup (where there is more than one
* nonalpha character between words).
*
* oct 4-5 1997 implemented the thing, at least the basic functionality
* of it all....
*
* TODO
* ----
*
* prevent generating duplicate words for an oid in the fti table
* save a plan for deletes
* create a function that will make the index *after* we have populated
* the main table (probably first delete all contents to be sure there's
* nothing in it, then re-populate the fti-table)
*
* can we do something with operator overloading or a separate function
* that can build the final query automagically?
*/
PG_MODULE_MAGIC;
#define MAX_FTI_QUERY_LENGTH 8192
extern Datum fti(PG_FUNCTION_ARGS);
static char *breakup(char *, char *);
static bool is_stopword(char *);
static bool new_tuple = false;
#ifdef USE_STOP_WORDS
/* THIS LIST MUST BE IN SORTED ORDER, A BINARY SEARCH IS USED!!!! */
char *StopWords[] = { /* list of words to skip in indexing */
"no",
"the",
"yes"
};
#endif /* USE_STOP_WORDS */
/* stuff for caching query-plans, stolen from contrib/spi/\*.c */
typedef struct
{
char *ident;
int nplans;
void **splan;
} EPlan;
static EPlan *InsertPlans = NULL;
static EPlan *DeletePlans = NULL;
static int nInsertPlans = 0;
static int nDeletePlans = 0;
static EPlan *find_plan(char *ident, EPlan ** eplan, int *nplans);
/***********************************************************************/
PG_FUNCTION_INFO_V1(fti);
Datum
fti(PG_FUNCTION_ARGS)
{
TriggerData *trigdata;
Trigger *trigger; /* to get trigger name */
int nargs; /* # of arguments */
char **args; /* arguments */
char *relname; /* triggered relation name */
Relation rel; /* triggered relation */
char *indexname; /* name of table for substrings */
HeapTuple rettuple = NULL;
TupleDesc tupdesc; /* tuple description */
bool isinsert = false;
bool isdelete = false;
int ret;
char query[MAX_FTI_QUERY_LENGTH];
Oid oid;
/*
* FILE *debug;
*/
/*
* debug = fopen("/dev/xconsole", "w"); fprintf(debug, "FTI: entered
* function\n"); fflush(debug);
*/
if (!CALLED_AS_TRIGGER(fcinfo))
/* internal error */
elog(ERROR, "not fired by trigger manager");
/* It's safe to cast now that we've checked */
trigdata = (TriggerData *) fcinfo->context;
if (TRIGGER_FIRED_FOR_STATEMENT(trigdata->tg_event))
ereport(ERROR,
(errcode(ERRCODE_TRIGGERED_ACTION_EXCEPTION),
errmsg("can't process STATEMENT events")));
if (TRIGGER_FIRED_BEFORE(trigdata->tg_event))
ereport(ERROR,
(errcode(ERRCODE_TRIGGERED_ACTION_EXCEPTION),
errmsg("must be fired AFTER event")));
if (TRIGGER_FIRED_BY_INSERT(trigdata->tg_event))
isinsert = true;
if (TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event))
{
isdelete = true;
isinsert = true;
}
if (TRIGGER_FIRED_BY_DELETE(trigdata->tg_event))
isdelete = true;
trigger = trigdata->tg_trigger;
rel = trigdata->tg_relation;
relname = SPI_getrelname(rel);
rettuple = trigdata->tg_trigtuple;
if (isdelete && isinsert) /* is an UPDATE */
rettuple = trigdata->tg_newtuple;
if ((ret = SPI_connect()) < 0)
/* internal error */
elog(ERROR, "SPI_connect failed, returned %d", ret);
nargs = trigger->tgnargs;
if (nargs < 2)
ereport(ERROR,
(errcode(ERRCODE_TRIGGERED_ACTION_EXCEPTION),
errmsg("fti trigger must have at least 2 arguments")));
args = trigger->tgargs;
indexname = args[0];
tupdesc = rel->rd_att; /* what the tuple looks like (?) */
/* get oid of current tuple, needed by all, so place here */
oid = HeapTupleGetOid(rettuple);
if (!OidIsValid(oid))
ereport(ERROR,
(errcode(ERRCODE_UNDEFINED_COLUMN),
errmsg("OID is not present"),
errhint("Full Text Index requires indexed tables be created WITH OIDS.")));
if (isdelete)
{
void *pplan;
Oid *argtypes;
Datum values[1];
EPlan *plan;
int i;
snprintf(query, MAX_FTI_QUERY_LENGTH, "D%s", indexname);
/* append "$arg" pieces in place; passing 'query' as its own source
 * argument to snprintf is undefined behavior, so use strncat instead */
for (i = 1; i < nargs; i++)
{
strncat(query, "$", MAX_FTI_QUERY_LENGTH - strlen(query) - 1);
strncat(query, args[i], MAX_FTI_QUERY_LENGTH - strlen(query) - 1);
}
plan = find_plan(query, &DeletePlans, &nDeletePlans);
if (plan->nplans <= 0)
{
argtypes = (Oid *) palloc(sizeof(Oid));
argtypes[0] = OIDOID;
snprintf(query, MAX_FTI_QUERY_LENGTH, "DELETE FROM %s WHERE id = $1", indexname);
pplan = SPI_prepare(query, 1, argtypes);
if (!pplan)
/* internal error */
elog(ERROR, "SPI_prepare returned NULL in delete");
pplan = SPI_saveplan(pplan);
if (pplan == NULL)
/* internal error */
elog(ERROR, "SPI_saveplan returned NULL in delete");
plan->splan = (void **) malloc(sizeof(void *));
*(plan->splan) = pplan;
plan->nplans = 1;
}
values[0] = oid;
ret = SPI_execp(*(plan->splan), values, NULL, 0);
if (ret != SPI_OK_DELETE)
ereport(ERROR,
(errcode(ERRCODE_TRIGGERED_ACTION_EXCEPTION),
errmsg("error executing delete")));
}
if (isinsert)
{
char *substring;
char *column;
void *pplan;
Oid *argtypes;
Datum values[2];
int colnum;
struct varlena *data;
EPlan *plan;
int i;
char *buff;
char *string;
snprintf(query, MAX_FTI_QUERY_LENGTH, "I%s", indexname);
/* append "$arg" pieces in place; passing 'query' as its own source
 * argument to snprintf is undefined behavior, so use strncat instead */
for (i = 1; i < nargs; i++)
{
strncat(query, "$", MAX_FTI_QUERY_LENGTH - strlen(query) - 1);
strncat(query, args[i], MAX_FTI_QUERY_LENGTH - strlen(query) - 1);
}
plan = find_plan(query, &InsertPlans, &nInsertPlans);
/* no plan yet, so allocate mem for argtypes */
if (plan->nplans <= 0)
{
argtypes = (Oid *) palloc(2 * sizeof(Oid));
argtypes[0] = VARCHAROID; /* create table t_name (string
* varchar, */
argtypes[1] = OIDOID; /* id oid); */
/* prepare plan to gain speed */
snprintf(query, MAX_FTI_QUERY_LENGTH, "INSERT INTO %s (string, id) VALUES ($1, $2)",
indexname);
pplan = SPI_prepare(query, 2, argtypes);
if (!pplan)
/* internal error */
elog(ERROR, "SPI_prepare returned NULL in insert");
pplan = SPI_saveplan(pplan);
if (pplan == NULL)
/* internal error */
elog(ERROR, "SPI_saveplan returned NULL in insert");
plan->splan = (void **) malloc(sizeof(void *));
*(plan->splan) = pplan;
plan->nplans = 1;
}
/* prepare plan for query */
for (i = 0; i < nargs - 1; i++)
{
colnum = SPI_fnumber(tupdesc, args[i + 1]);
if (colnum == SPI_ERROR_NOATTRIBUTE)
ereport(ERROR,
(errcode(ERRCODE_UNDEFINED_COLUMN),
errmsg("column \"%s\" of \"%s\" does not exist",
args[i + 1], indexname)));
/* Get the char* representation of the column */
column = SPI_getvalue(rettuple, tupdesc, colnum);
/* make sure we don't try to index NULL's */
if (column)
{
string = column;
while (*string != '\0')
{
*string = tolower((unsigned char) *string);
string++;
}
data = (struct varlena *) palloc(sizeof(int32) + strlen(column) +1);
buff = palloc(strlen(column) + 1);
/* saves lots of calls in while-loop and in breakup() */
new_tuple = true;
while ((substring = breakup(column, buff)))
{
int l;
l = strlen(substring);
data->vl_len = l + sizeof(int32);
memcpy(VARDATA(data), substring, l);
values[0] = PointerGetDatum(data);
values[1] = oid;
ret = SPI_execp(*(plan->splan), values, NULL, 0);
if (ret != SPI_OK_INSERT)
ereport(ERROR,
(errcode(ERRCODE_TRIGGERED_ACTION_EXCEPTION),
errmsg("error executing insert")));
}
pfree(buff);
pfree(data);
}
}
}
SPI_finish();
return PointerGetDatum(rettuple);
}
static char *
breakup(char *string, char *substring)
{
static char *last_start;
static char *cur_pos;
if (new_tuple)
{
cur_pos = last_start = &string[strlen(string) - 1];
new_tuple = false; /* don't initialize this next time */
}
while (cur_pos > string) /* don't read before start of 'string' */
{
/*
 * skip pieces at the end of the string that are not alphanumeric
 * (e.g. for 'string$%^&', last_start first points to '&', and after
 * this loop to 'g')
 */
if (!isalnum((unsigned char) *last_start))
{
while (!isalnum((unsigned char) *last_start) &&
last_start > string)
last_start--;
cur_pos = last_start;
}
cur_pos--; /* substrings are at minimum 2 characters long */
if (isalnum((unsigned char) *cur_pos))
{
/* Houston, we have a substring! :) */
memcpy(substring, cur_pos, last_start - cur_pos + 1);
substring[last_start - cur_pos + 1] = '\0';
if (!is_stopword(substring))
return substring;
}
else
{
last_start = cur_pos - 1;
cur_pos = last_start;
}
}
return NULL; /* we've processed all of 'string' */
}
/* copied from src/backend/parser/keywords.c and adjusted for our situation*/
static bool
is_stopword(char *text)
{
#ifdef USE_STOP_WORDS
char **StopLow; /* for list of stop-words */
char **StopHigh;
char **StopMiddle;
int difference;
StopLow = &StopWords[0]; /* initialize stuff for binary search */
StopHigh = endof(StopWords);
/* Loop invariant: *StopLow <= text < *StopHigh */
while (StopLow < StopHigh)
{
StopMiddle = StopLow + (StopHigh - StopLow) / 2;
difference = strcmp(*StopMiddle, text);
if (difference == 0)
return (true);
else if (difference < 0)
StopLow = StopMiddle + 1;
else
StopHigh = StopMiddle;
}
#endif /* USE_STOP_WORDS */
return (false);
}
/* for caching of query plans, stolen from contrib/spi/\*.c */
static EPlan *
find_plan(char *ident, EPlan ** eplan, int *nplans)
{
EPlan *newp;
int i;
if (*nplans > 0)
{
for (i = 0; i < *nplans; i++)
{
if (strcmp((*eplan)[i].ident, ident) == 0)
break;
}
if (i != *nplans)
return (*eplan + i);
*eplan = (EPlan *) realloc(*eplan, (i + 1) * sizeof(EPlan));
newp = *eplan + i;
}
else
{
newp = *eplan = (EPlan *) malloc(sizeof(EPlan));
(*nplans) = i = 0;
}
newp->ident = (char *) malloc(strlen(ident) + 1);
strcpy(newp->ident, ident);
newp->nplans = 0;
newp->splan = NULL;
(*nplans)++;
return (newp);
}


@ -1,212 +0,0 @@
#!/usr/bin/perl
#
# $PostgreSQL: pgsql/contrib/fulltextindex/fti.pl,v 1.9 2006/03/11 04:38:29 momjian Exp $
#
# This script extracts all suffixes of all words in a specific column in a table
# and generates output that can be loaded into a new table with the
# psql '\copy' command. The new table should have the following structure:
#
# create table tab (
# string text,
# id oid
# );
#
# Note that you cannot use 'copy' (the SQL-command) directly, because
# there's no '\.' included at the end of the output.
#
# The output can be fed through the UNIX commands 'uniq' and 'sort'
# to generate the smallest and sorted output to populate the fti-table.
#
# Example:
#
# fti.pl -u -d mydb -t mytable -c mycolumn,mycolumn2 -f myfile
# sort -o myoutfile myfile
# uniq myoutfile sorted-file
#
# psql -u mydb
#
# \copy my_fti_table from sorted-file
#
# create index fti_idx on my_fti_table (string,id);
#
# create function fti() returns trigger as
# '/path/to/fti/file/fti.so'
# language C;
#
# create trigger my_fti_trigger after update or insert or delete
# on mytable
# for each row execute procedure fti(my_fti_table, mycolumn);
#
# Make sure you have an index on mytable(oid) to be able to do somewhat
# efficient substring searches.
#use lib '/usr/local/pgsql/lib/perl5/';
use lib '/mnt/web/guide/postgres/lib/perl5/site_perl';
use Pg;
use Getopt::Std;
$PGRES_EMPTY_QUERY = 0 ;
$PGRES_COMMAND_OK = 1 ;
$PGRES_TUPLES_OK = 2 ;
$PGRES_COPY_OUT = 3 ;
$PGRES_COPY_IN = 4 ;
$PGRES_BAD_RESPONSE = 5 ;
$PGRES_NONFATAL_ERROR = 6 ;
$PGRES_FATAL_ERROR = 7 ;
# the minimum length of word to include in the full text index
$MIN_WORD_LENGTH = 2;
# the minimum length of the substrings in the full text index
$MIN_SUBSTRING_LENGTH = 2;
$[ = 0; # make sure string offsets start at 0
sub break_up {
my $string = pop @_;
# convert strings to lower case
$string = lc($string);
@strings = split(/\W+/, $string);
@subs = ();
foreach $s (@strings) {
$len = length($s);
next if ($len <= $MIN_WORD_LENGTH);
for ($i = 0; $i <= $len - $MIN_SUBSTRING_LENGTH; $i++) {
$tmp = substr($s, $i);
push(@subs, $tmp);
}
}
return @subs;
}
sub connect_db {
my $dbname = shift @_;
my $user = shift @_;
my $passwd = shift @_;
if (!defined($dbname) || $dbname eq "") {
return 1;
}
$connect_string = "dbname=$dbname";
if ($user ne "") {
if ($passwd eq "") {
return 0;
}
$connect_string = "$connect_string user=$user password=$passwd ".
"authtype=password";
}
$PG_CONN = PQconnectdb($connect_string);
if (PQstatus($PG_CONN)) {
print STDERR "Couldn't make connection with database!\n";
print STDERR PQerrorMessage($PG_CONN), "\n";
return 0;
}
return 1;
}
sub quit_prog {
close(OUT);
unlink $opt_f;
if (defined($PG_CONN)) {
PQfinish($PG_CONN);
}
exit 1;
}
sub get_username {
print "Username: ";
chop($n = <STDIN>);
return $n;
}
sub get_password {
print "Password: ";
system("stty -echo < /dev/tty");
chop($pwd = <STDIN>);
print "\n";
system("stty echo < /dev/tty");
return $pwd;
}
sub main {
getopts('d:t:c:f:u');
if (!$opt_d || !$opt_t || !$opt_c || !$opt_f) {
print STDERR "usage: $0 [-u] -d database -t table -c column[,column...] ".
"-f output-file\n";
return 1;
}
@cols = split(/,/, $opt_c);
if (defined($opt_u)) {
$uname = get_username();
$pwd = get_password();
} else {
$uname = "";
$pwd = "";
}
$SIG{'INT'} = 'quit_prog';
if (!connect_db($opt_d, $uname, $pwd)) {
print STDERR "Connecting to database failed!\n";
return 1;
}
if (!open(OUT, ">$opt_f")) {
print STDERR "Couldn't open file '$opt_f' for output!\n";
return 1;
}
PQexec($PG_CONN, "SET search_path = public");
PQexec($PG_CONN, "begin");
$query = "declare C cursor for select (\"";
$query .= join("\" || ' ' || \"", @cols);
$query .= "\") as string, oid from $opt_t";
$res = PQexec($PG_CONN, $query);
if (!$res || (PQresultStatus($res) != $PGRES_COMMAND_OK)) {
print STDERR "Error declaring cursor!\n";
print STDERR PQerrorMessage($PG_CONN), "\n";
PQfinish($PG_CONN);
return 1;
}
PQclear($res);
$query = "fetch in C";
while (($res = PQexec($PG_CONN, $query)) &&
(PQresultStatus($res) == $PGRES_TUPLES_OK) &&
(PQntuples($res) == 1)) {
$col = PQgetvalue($res, 0, 0);
$oid = PQgetvalue($res, 0, 1);
@subs = break_up($col);
foreach $i (@subs) {
print OUT "$i\t$oid\n";
}
}
if (!$res || (PQresultStatus($res) != $PGRES_TUPLES_OK)) {
print STDERR "Error retrieving data from backend!\n";
print STDERR PQerrorMessage($PG_CONN), "\n";
PQfinish($PG_CONN);
return 1;
}
PQclear($res);
PQfinish($PG_CONN);
return 0;
}
exit main();


@ -1,6 +0,0 @@
-- Adjust this setting to control where the objects get created.
SET search_path = public;
CREATE OR REPLACE FUNCTION fti() RETURNS trigger AS
'MODULE_PATHNAME', 'fti'
LANGUAGE C VOLATILE CALLED ON NULL INPUT;


@ -1,350 +0,0 @@
#!/bin/sh
PATH=${PATH}:/usr/local/pgsql/bin
TIMEFORMAT="%3Uu %3Ss %lR %P%%"
export PATH TIMEFORMAT
case "$1" in
-n)
trashing=0
;;
*)
trashing=1
;;
esac
echo "TESTING ON UNCLUSTERED FTI"
# trash disk
if [ $trashing = 1 ]
then
echo "trashing"
psql -q -n -o /dev/null -c "select count(*) from product;" test
else
echo
fi
Q="select count(p.oid) from product p, artist_fti f1, artist_fti f2
where
f1.string ~ '^lapton' and f2.string ~ '^ric' and
f1.id=p.oid and f2.id=p.oid;"
echo -n "1: ^lapton and ^ric : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "2: ^lapton and ^ric : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "3: ^lapton and ^ric : "
time psql -q -n -o /dev/null -c "$Q" test
# trash disk
if [ $trashing = 1 ]
then
echo "trashing"
psql -q -n -o /dev/null -c "select count(*) from product;" test
else
echo
fi
Q="select count(p.oid) from product p, artist_fti f1, artist_fti f2
where
f1.string ~ '^lling' and f2.string ~ '^tones' and
f1.id=p.oid and f2.id=p.oid;"
echo -n "1: ^lling and ^tones : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "2: ^lling and ^tones : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "3: ^lling and ^tones : "
time psql -q -n -o /dev/null -c "$Q" test
# trash disk
if [ $trashing = 1 ]
then
echo "trashing"
psql -q -n -o /dev/null -c "select count(*) from product;" test
else
echo
fi
Q="select count(p.oid) from product p, artist_fti f1, artist_fti f2
where
f1.string ~ '^aughan' and f2.string ~ '^evie' and
f1.id=p.oid and f2.id=p.oid;"
echo -n "1: ^aughan and ^evie : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "2: ^aughan and ^evie : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "3: ^aughan and ^evie : "
time psql -q -n -o /dev/null -c "$Q" test
# trash disk
if [ $trashing = 1 ]
then
echo "trashing"
psql -q -n -o /dev/null -c "select count(*) from product;" test
else
echo
fi
Q="select count(p.oid) from product p, artist_fti f1
where
f1.string ~ '^lling' and
p.oid=f1.id;"
echo -n "1: ^lling : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "2: ^lling : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "3: ^lling : "
time psql -q -n -o /dev/null -c "$Q" test
# trash disk
if [ $trashing = 1 ]
then
echo "trashing"
psql -q -n -o /dev/null -c "select count(*) from product;" test
else
echo
fi
Q="select count(p.oid) from product p, artist_fti f1, artist_fti f2, artist_fti f3
where
f1.string ~ '^stev' and
f2.string ~ '^ray' and
f3.string ~ '^vaugh' and
p.oid=f1.id and p.oid=f2.id and p.oid=f3.id;"
echo -n "1: ^stev and ^ray and ^vaugh : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "2: ^stev and ^ray and ^vaugh : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "3: ^stev and ^ray and ^vaugh : "
time psql -q -n -o /dev/null -c "$Q" test
# trash disk
if [ $trashing = 1 ]
then
echo "trashing"
psql -q -n -o /dev/null -c "select count(*) from product;" test
else
echo
fi
Q="select count(*) from artist_fti where string ~ '^lling';"
echo -n "1: ^lling (no join) : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "2: ^lling (no join) : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "3: ^lling (no join) : "
time psql -q -n -o /dev/null -c "$Q" test
# trash disk
if [ $trashing = 1 ]
then
echo "trashing"
psql -q -n -o /dev/null -c "select count(*) from product;" test
else
echo
fi
Q="select count(*) from artist_fti where string ~ '^vaughan';"
echo -n "1: ^vaughan (no join) : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "2: ^vaughan (no join) : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "3: ^vaughan (no join) : "
time psql -q -n -o /dev/null -c "$Q" test
# trash disk
if [ $trashing = 1 ]
then
echo "trashing"
psql -q -n -o /dev/null -c "select count(*) from product;" test
else
echo
fi
Q="select count(*) from artist_fti where string ~ '^rol';"
echo -n "1: ^rol (no join) : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "2: ^rol (no join) : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "3: ^rol (no join) : "
time psql -q -n -o /dev/null -c "$Q" test
echo
echo "TESTING ON CLUSTERED FTI"
# trash disk
if [ $trashing = 1 ]
then
echo "trashing"
psql -q -n -o /dev/null -c "select count(*) from product;" test
else
echo
fi
Q="select count(p.oid) from product p, clustered f1, clustered f2
where
f1.string ~ '^lapton' and f2.string ~ '^ric' and
f1.id=p.oid and f2.id=p.oid;"
echo -n "1: ^lapton and ^ric : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "2: ^lapton and ^ric : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "3: ^lapton and ^ric : "
time psql -q -n -o /dev/null -c "$Q" test
# trash disk
if [ $trashing = 1 ]
then
echo "trashing"
psql -q -n -o /dev/null -c "select count(*) from product;" test
else
echo
fi
Q="select count(p.oid) from product p, clustered f1, clustered f2
where
f1.string ~ '^lling' and f2.string ~ '^tones' and
f1.id=p.oid and f2.id=p.oid;"
echo -n "1: ^lling and ^tones : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "2: ^lling and ^tones : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "3: ^lling and ^tones : "
time psql -q -n -o /dev/null -c "$Q" test
# trash disk
if [ $trashing = 1 ]
then
echo "trashing"
psql -q -n -o /dev/null -c "select count(*) from product;" test
else
echo
fi
Q="select count(p.oid) from product p, clustered f1, clustered f2
where
f1.string ~ '^aughan' and f2.string ~ '^evie' and
f1.id=p.oid and f2.id=p.oid;"
echo -n "1: ^aughan and ^evie : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "2: ^aughan and ^evie : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "3: ^aughan and ^evie : "
time psql -q -n -o /dev/null -c "$Q" test
# trash disk
if [ $trashing = 1 ]
then
echo "trashing"
psql -q -n -o /dev/null -c "select count(*) from product;" test
else
echo
fi
Q="select count(p.oid) from product p, clustered f1
where
f1.string ~ '^lling' and
p.oid=f1.id;"
echo -n "1: ^lling : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "2: ^lling : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "3: ^lling : "
time psql -q -n -o /dev/null -c "$Q" test
# trash disk
if [ $trashing = 1 ]
then
echo "trashing"
psql -q -n -o /dev/null -c "select count(*) from product;" test
else
echo
fi
Q="select count(p.oid) from product p, clustered f1, clustered f2, clustered f3
where
f1.string ~ '^stev' and
f2.string ~ '^ray' and
f3.string ~ '^vaugh' and
p.oid=f1.id and p.oid=f2.id and p.oid=f3.id;"
echo -n "1: ^stev and ^ray and ^vaugh : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "2: ^stev and ^ray and ^vaugh : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "3: ^stev and ^ray and ^vaugh : "
time psql -q -n -o /dev/null -c "$Q" test
# trash disk
if [ $trashing = 1 ]
then
echo "trashing"
psql -q -n -o /dev/null -c "select count(*) from product;" test
else
echo
fi
Q="select count(*) from clustered where string ~ '^lling';"
echo -n "1: ^lling (no join) : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "2: ^lling (no join) : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "3: ^lling (no join) : "
time psql -q -n -o /dev/null -c "$Q" test
# trash disk
if [ $trashing = 1 ]
then
echo "trashing"
psql -q -n -o /dev/null -c "select count(*) from product;" test
else
echo
fi
Q="select count(*) from clustered where string ~ '^vaughan';"
echo -n "1: ^vaughan (no join) : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "2: ^vaughan (no join) : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "3: ^vaughan (no join) : "
time psql -q -n -o /dev/null -c "$Q" test
# trash disk
if [ $trashing = 1 ]
then
echo "trashing"
psql -q -n -o /dev/null -c "select count(*) from product;" test
else
echo
fi
Q="select count(*) from clustered where string ~ '^rol';"
echo -n "1: ^rol (no join) : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "2: ^rol (no join) : "
time psql -q -n -o /dev/null -c "$Q" test
echo -n "3: ^rol (no join) : "
time psql -q -n -o /dev/null -c "$Q" test


@ -1,4 +0,0 @@
-- Adjust this setting to control where the objects get created.
SET search_path = public;
DROP FUNCTION fti() CASCADE;


@ -1,21 +0,0 @@
#
# $PostgreSQL: pgsql/contrib/mSQL-interface/Makefile,v 1.12 2006/07/15 03:33:14 tgl Exp $
#
MODULE_big = mpgsql
SO_MAJOR_VERSION = 0
SO_MINOR_VERSION = 0
OBJS = mpgsql.o
DOCS = README.mpgsql
PG_CPPFLAGS = -I$(libpq_srcdir)
ifdef USE_PGXS
PGXS := $(shell pg_config --pgxs)
include $(PGXS)
else
subdir = contrib/mSQL-interface
top_builddir = ../..
include $(top_builddir)/src/Makefile.global
include $(top_srcdir)/contrib/contrib-global.mk
endif


@ -1,86 +0,0 @@
Hello! :)
(Sorry for my English, but if I wrote in Portuguese, you wouldn't
understand anything. :])
I hope this is the right place to post this. I'm a newcomer to these
lists, so I hope I did it right. :]
<BOREDOM>
When I started using SQL, I started with mSQL. I developed a lot
of useful apps for me and my job in C, mainly because I loved its
elegant, simple API. But for a large project I'm doing these days, I
thought it was not enough, because it lacked a lot of features I started to
need, like security and subselects. (And it's not free. :))
So after looking at the options, I chose to start again with
Postgres. It offered everything that I needed, and the documentation is
really good (remind me to thank whoever wrote it).
But for my little apps, I needed to start porting them to libpq.
After looking at libpq's syntax, I found it was better to write a bridge
between the mSQL API and libpq: rewriting the libmsql.a routines as
calls into libpq would make things much easier. I guess the
results are quite good right now.
</BOREDOM>
Ok, let's summarize it:
mpgsql.c is the bridge. Acting as a wrapper it works really well,
since I could run mSQL on top of it. But it's not exact. Some highlights:
CONS:
* It's not well documented
(this post is, in fact, its first documentation attempt);
* It doesn't handle field types correctly. I plan to fix it
if people start sending feedback;
* It's limited to 10 simultaneous connections. I plan to raise
this limit; I'm still working out how;
* I'd like to make it reentrant/thread-safe, although I don't
think this could be done without changing the API structure;
* Error management should be better. This is my first priority
now;
* Some calls are just empty implementations.
PROS:
* The mSQL monitor runs okay. :]
* It's really cool. :)
* It makes mSQL-based applications compatible with PostgreSQL just by
changing the link options.
* It uses PostgreSQL. :]
* The mSQL API is far easier to use and understand than libpq.
Consider this example:
#include <stdio.h>
#include "msql.h"

int main(void) {
	int sid;

	sid = msqlConnect(NULL);	/* connects via unix socket */
	if (sid >= 0) {
		m_result *rlt;
		m_row row;

		msqlSelectDB(sid, "hosts");
		if (msqlQuery(sid, "select host_id from hosts") >= 0) {
			rlt = msqlStoreResult();
			while ((row = msqlFetchRow(rlt)) != NULL)
				printf("hostid: %s\n", row[0]);
			msqlFreeResult(rlt);
		}
		msqlClose(sid);
	}
	return 0;
}
I enclose mpgsql.c. I'd like to maintain it, and (maybe, am
I dreaming?) make it part of the pgsql distribution. I guess that doesn't
depend on me, but mainly on its acceptance by its users.
Hm... I forgot: you'll need a copy of msql.h, since it's copyrighted
by Hughes Technologies Pty Ltd. If you don't have it yet, fetch one
from www.hughes.com.au.
I would like to hear users' ideas. My next goal is to add better
error handling, to document it better, and to try to let relshow
run through it. :)
done. Aldrin Leal <aldrin@americasnet.com>
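The 10-connection ceiling mentioned in the CONS list comes from a fixed-size handle table in mpgsql.c. The allocation pattern can be sketched without libpq (handle_alloc and handle_close are hypothetical names for this sketch; a void * stands in for PGconn *):

```c
#include <stddef.h>

/* Sketch (not the module's exact code) of the fixed handle table
 * behind mpgsql.c's 10-connection limit. */
#define HNDMAX 10

static void *handles[HNDMAX];

/* Return the first free slot index as the handle, or -1 when the
 * table is full ("Out of database handlers."). */
static int
handle_alloc(void *conn)
{
	int			i;

	for (i = 0; i < HNDMAX; i++)
		if (handles[i] == NULL)
		{
			handles[i] = conn;
			return i;
		}
	return -1;
}

/* Release a slot so a later connect can reuse it. */
static void
handle_close(int h)
{
	if (h >= 0 && h < HNDMAX)
		handles[h] = NULL;
}
```

msqlConnect() in mpgsql.c follows the same scan-for-first-NULL-slot approach, so raising the limit is mostly a matter of growing (or dynamically allocating) the table.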


@ -1,372 +0,0 @@
/* $PostgreSQL: pgsql/contrib/mSQL-interface/mpgsql.c,v 1.8 2006/03/11 04:38:29 momjian Exp $ */
#include <time.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include "msql.h"
#include "libpq-fe.h"
#define HNDMAX 10
PGconn *PGh[HNDMAX] = {
NULL, NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL, NULL
};
#define E_NOHANDLERS 0
char *msqlErrors[] = {
"Out of database handlers."
};
char msqlErrMsg[BUFSIZ],
*tfrom = "dunno";
PGresult *queryres = NULL;
int
msqlConnect(char *host)
{
int count;
for (count = 0; count < HNDMAX; count++)
if (PGh[count] == NULL)
break;
if (count == HNDMAX)
{
strncpy(msqlErrMsg, msqlErrors[E_NOHANDLERS], BUFSIZ);
return -1;
}
PGh[count] = malloc(sizeof(PGconn));
PGh[count]->pghost = host ? strdup(host) : NULL;
return count;
}
int
msqlSelectDB(int handle, char *dbname)
{
char *options = calloc(1, BUFSIZ);
char *e = getenv("PG_OPTIONS");
if (e == NULL)
e = "";
if (PGh[handle]->pghost)
{
strcat(options, "host=");
strncat(options, PGh[handle]->pghost, BUFSIZ);
strncat(options, " ", BUFSIZ);
free(PGh[handle]->pghost);
PGh[handle]->pghost = NULL;
}
strncat(options, "dbname=", BUFSIZ);
strncat(options, dbname, BUFSIZ);
strncat(options, " ", BUFSIZ);
strncat(options, e, BUFSIZ);
free(PGh[handle]);
PGh[handle] = PQconnectdb(options);
free(options);
strncpy(msqlErrMsg, PQerrorMessage(PGh[handle]), BUFSIZ);
return (PQstatus(PGh[handle]) == CONNECTION_BAD ? -1 : 0);
}
int
msqlQuery(int handle, char *query)
{
char *tq = strdup(query);
char *p = tq;
PGresult *res;
PGconn *conn = PGh[handle];
ExecStatusType rcode;
res = PQexec(conn, p);
rcode = PQresultStatus(res);
if (rcode == PGRES_TUPLES_OK)
{
queryres = res;
return PQntuples(res);
}
else if (rcode == PGRES_FATAL_ERROR || rcode == PGRES_NONFATAL_ERROR)
{
PQclear(res);
queryres = NULL;
return -1;
}
else
{
PQclear(res);
queryres = NULL;
return 0;
}
}
int
msqlCreateDB(int a, char *b)
{
char tbuf[BUFSIZ];
snprintf(tbuf, BUFSIZ, "create database %s", b);
return msqlQuery(a, tbuf) >= 0 ? 0 : -1;
}
int
msqlDropDB(int a, char *b)
{
char tbuf[BUFSIZ];
snprintf(tbuf, BUFSIZ, "drop database %s", b);
return msqlQuery(a, tbuf) >= 0 ? 0 : -1;
}
int
msqlShutdown(int a)
{
	return 0;					/* stub: not supported via libpq */
}
int
msqlGetProtoInfo(void)
{
	return 0;					/* stub */
}
int
msqlReloadAcls(int a)
{
	return 0;					/* stub */
}
char *
msqlGetServerInfo(void)
{
	return NULL;				/* stub */
}
char *
msqlGetHostInfo(void)
{
	return NULL;				/* stub */
}
char *
msqlUnixTimeToDate(time_t date)
{
	return NULL;				/* stub */
}
char *
msqlUnixTimeToTime(time_t time)
{
	return NULL;				/* stub */
}
void
msqlClose(int a)
{
PQfinish(PGh[a]);
PGh[a] = NULL;
if (queryres)
{
PQclear(queryres);		/* PGresult memory must be freed with PQclear, not free */
queryres = NULL;
}
}
void
msqlDataSeek(m_result * result, int count)
{
int c;
result->cursor = result->queryData;
for (c = 1; c < count; c++)
if (result->cursor->next)
result->cursor = result->cursor->next;
}
void
msqlFieldSeek(m_result * result, int count)
{
int c;
result->fieldCursor = result->fieldData;
for (c = 1; c < count; c++)
if (result->fieldCursor->next)
result->fieldCursor = result->fieldCursor->next;
}
void
msqlFreeResult(m_result * result)
{
if (result)
{
/* Clears fields */
free(result->fieldData);
result->cursor = result->queryData;
while (result->cursor)
{
int c;
m_row m = result->cursor->data;
for (c = 0; m[c]; c++)
free(m[c]);
result->cursor = result->cursor->next;
}
free(result->queryData);
free(result);
}
}
m_row
msqlFetchRow(m_result * row)
{
m_data *r = row->cursor;
if (r)
{
row->cursor = row->cursor->next;
return (m_row) r->data;
}
return (m_row) NULL;
}
m_seq *
msqlGetSequenceInfo(int a, char *b)
{
	return NULL;				/* stub */
}
m_field *
msqlFetchField(m_result * mr)
{
m_field *m = (m_field *) mr->fieldCursor;
if (m)
{
mr->fieldCursor = mr->fieldCursor->next;
return m;
}
return NULL;
}
m_result *
msqlListDBs(int a)
{
m_result *m;
if (msqlQuery(a, "select datname from pg_database") > 0)
{
m = msqlStoreResult();
return m;
}
else
return NULL;
}
m_result *
msqlListTables(int a)
{
m_result *m;
char tbuf[BUFSIZ];
snprintf(tbuf, BUFSIZ,
"select relname from pg_class where relkind='r' and relowner=%d",
geteuid());
if (msqlQuery(a, tbuf) > 0)
{
m = msqlStoreResult();
return m;
}
else
return NULL;
}
m_result *
msqlListFields(int a, char *b)
{
	return NULL;				/* stub */
}
m_result *
msqlListIndex(int a, char *b, char *c)
{
m_result *m;
char tbuf[BUFSIZ];
snprintf(tbuf, BUFSIZ,
"select relname from pg_class where relkind='i' and relowner=%d",
geteuid());
if (msqlQuery(a, tbuf) > 0)
{
m = msqlStoreResult();
return m;
}
else
return NULL;
}
m_result *
msqlStoreResult(void)
{
if (queryres)
{
m_result *mr = malloc(sizeof(m_result));
m_fdata *mf;
m_data *md;
int count;
mr->queryData = mr->cursor = NULL;
mr->numRows = PQntuples(queryres);
mr->numFields = PQnfields(queryres);
mf = calloc(PQnfields(queryres), sizeof(m_fdata));
for (count = 0; count < PQnfields(queryres); count++)
{
	/* the old LHS casts were invalid C; mf is already an m_fdata * */
	mf[count].field.name = strdup(PQfname(queryres, count));
	mf[count].field.table = tfrom;
	mf[count].field.type = CHAR_TYPE;
	mf[count].field.length = PQfsize(queryres, count);
	mf[count].next = mf + count + 1;
}
mf[count - 1].next = NULL;		/* terminate the field list */
md = calloc(PQntuples(queryres), sizeof(m_data));
for (count = 0; count < PQntuples(queryres); count++)
{
	m_row rows = calloc(PQnfields(queryres) * sizeof(m_row) + 1, 1);
	int c;
	for (c = 0; c < PQnfields(queryres); c++)
		rows[c] = strdup(PQgetvalue(queryres, count, c));
	md[count].data = rows;
	md[count].width = PQnfields(queryres);
	md[count].next = md + count + 1;
}
md[count - 1].next = NULL;		/* terminate the row list */
mr->queryData = mr->cursor = md;
mr->fieldCursor = mr->fieldData = mf;
return mr;
}
else
return NULL;
}
time_t
msqlDateToUnixTime(char *a)
{
	return (time_t) 0;			/* stub */
}
time_t
msqlTimeToUnixTime(char *b)
{
	return (time_t) 0;			/* stub */
}
char *
msql_tmpnam(void)
{
	/* tmpnam() does not take an mktemp-style template; give it a writable buffer */
	static char tmpbuf[L_tmpnam];

	return tmpnam(tmpbuf);
}
int
msqlLoadConfigFile(char *a)
{
	return 0;					/* stub */
}


@ -1,8 +0,0 @@
This directory contains tools to create a mapping table from MAC
addresses (e.g., Ethernet hardware addresses) to human-readable
manufacturer strings. The `createoui' script builds the table
structure; `updateoui' obtains the current official mapping table
from the web site of the IEEE, converts it, and stores it in the
database; `dropoui' removes everything. Use the --help option to
get more usage information from the respective script. All three
use the psql program; any extra arguments will be passed to psql.


@ -1,55 +0,0 @@
#! /bin/sh
# $PostgreSQL: pgsql/contrib/mac/createoui,v 1.3 2006/03/11 04:38:30 momjian Exp $
# Utility to create manufacturer's oui table
# OUI is "Organizationally Unique Identifier" assigned by IEEE.
# There are currently three duplicate listings, so we can not enforce
# uniqueness in the OUI field.
# - thomas 2000-08-21
args=
update=0
while [ $# -gt 0 ]
do
case "$1" in
--update)
update=1
;;
--noupdate)
update=0
;;
--help)
echo "Usage: $0 --[no]update dbname"
exit
;;
*)
args="$args $1"
;;
esac
shift
done
psql -e $args <<EOF
-- Table containing OUI portions of MAC address and manufacturer's name
create table macoui (
addr macaddr not null,
name text not null
);
-- Create an index to help lookups
create index macoui_idx on macoui (addr);
-- Function to return manufacturer's name given MAC address
create function manuf (macaddr)
returns text as '
select name from macoui m where trunc(\$1) = m.addr;
' language SQL;
EOF
if [ $update -gt 0 ]; then
updateoui $args
fi
exit


@ -1,27 +0,0 @@
#! /bin/sh
# Utility to remove manufacturer's oui table
# $PostgreSQL: pgsql/contrib/mac/dropoui,v 1.2 2006/03/11 04:38:30 momjian Exp $
args=
while [ $# -gt 0 ]
do
case "$1" in
--help)
echo "Usage: $0 dbname"
exit
;;
*)
args="$args $1"
;;
esac
shift
done
psql $args <<EOF
drop function manuf(macaddr);
drop table macoui;
EOF
exit


@ -1,53 +0,0 @@
# $PostgreSQL: pgsql/contrib/mac/ouiparse.awk,v 1.3 2003/11/29 22:39:24 pgsql Exp $
#
# ouiparse.awk
# Author: Lawrence E. Rosenman <ler@lerctr.org>
# Original Date: 30 July 2000 (in this form).
# This AWK script takes the IEEE's oui.txt file and creates insert
# statements to populate a SQL table with the following attributes:
# create table oui (
# oui macaddr primary key,
# manufacturer text);
# the table name is set by setting the AWK variable TABLE
#
# we translate the character apostrophe (') to double apostrophe ('') inside
# the company name to avoid SQL errors.
#
BEGIN {
TABLE="macoui";
printf "DELETE FROM %s;",TABLE;
printf "BEGIN TRANSACTION;";
nrec=0;
}
END {
# if (nrec > 0)
printf "COMMIT TRANSACTION;";
}
# match ONLY lines that begin with 2 hex numbers, -, and another hex number
/^[0-9a-fA-F][0-9a-fA-F]-[0-9a-fA-F]/ {
# if (nrec >= 100) {
# printf "COMMIT TRANSACTION;";
# printf "BEGIN TRANSACTION;";
# nrec=0;
# } else {
# nrec++;
# }
# Get the OUI
OUI=$1;
# Skip the (hex) tag to get to Company Name
Company=$3;
# make the OUI look like a macaddr
gsub("-",":",OUI);
OUI=OUI ":00:00:00"
# Pick up the rest of the company name
for (i=4;i<=NF;i++)
Company=Company " " $i;
# Modify any apostrophes (') to avoid grief below.
gsub("'","''",Company);
# Print out for the SQL table insert
printf "INSERT INTO %s (addr, name) VALUES (trunc(macaddr \'%s\'),\'%s\');\n",
TABLE,OUI,Company;
}
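The per-line transformation the awk script performs can be sketched in Python (a hypothetical `oui_line_to_insert` helper; the input format is assumed from the comments above):

```python
def oui_line_to_insert(line, table="macoui"):
    """Turn one IEEE oui.txt entry line into an INSERT statement,
    mirroring the steps in ouiparse.awk."""
    fields = line.split()
    oui = fields[0]                              # e.g. "00-00-0C"
    company = " ".join(fields[2:])               # fields[1] is the "(hex)" tag
    addr = oui.replace("-", ":") + ":00:00:00"   # make the OUI look like a macaddr
    company = company.replace("'", "''")         # escape apostrophes for SQL
    return ("INSERT INTO %s (addr, name) VALUES (trunc(macaddr '%s'),'%s');"
            % (table, addr, company))
```

Like the awk version, this leaves line selection, batching, and transaction handling to the caller.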


@ -1,34 +0,0 @@
#! /bin/sh
# Utility to create manufacturer's OUI table
args=
refresh=0
while [ $# -gt 0 ]
do
case "$1" in
--refresh|--fetch|-r)
refresh=1
;;
--norefresh|--nofetch)
refresh=0
;;
--help)
echo "Usage: $0 --[no]refresh dbname"
exit
;;
*)
args="$args $1"
;;
esac
shift
done
if [ $refresh -gt 0 ]; then
[ -e oui.txt ] && rm -rf oui.txt
wget -nd 'http://standards.ieee.org/regauth/oui/oui.txt'
fi
awk -f ouiparse.awk < oui.txt | psql -e $args
exit


@ -1,13 +0,0 @@
# $PostgreSQL: pgsql/contrib/tips/Makefile,v 1.8 2005/09/27 17:13:10 tgl Exp $
DOCS = README.apachelog
ifdef USE_PGXS
PGXS := $(shell pg_config --pgxs)
include $(PGXS)
else
subdir = contrib/tips
top_builddir = ../..
include $(top_builddir)/src/Makefile.global
include $(top_srcdir)/contrib/contrib-global.mk
endif


@ -1,91 +0,0 @@
HOW TO get Apache to log to PostgreSQL
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Note: the contents of the files 'httpconf.txt' and 'apachelog.sql' appear below
this text.
First, this is intended mostly as a starting point, an example of how to do it.
The file 'httpconf.txt' is commented and contains two example lines to make
this work, a custom log format, and a line that sends the log data to psql.
I think that the comments in this file should be sufficient.
The file 'apachelog.sql' is a little SQL to create the table and grant
permissions to it.
You must:
1. Already have 'nobody' (or whatever your web server runs as) as a valid
PostgreSQL user.
2. Create the database to hold the log (example: 'createdb www_log').
3. Edit the file 'apachelog.sql' and change the name of the table to
whatever you used in step 2. ALSO, if need be, change the name 'nobody' in
the grant statement.
4. As an appropriate user (postgres is ok), do 'psql www_log < apachelog.sql'.
This should have created the table and granted access to it.
5. SAVE A COPY OF YOUR httpd.conf !!!
6. Edit httpd.conf, add the two lines in the example file as appropriate,
IN THE ORDER IN WHICH THEY APPEAR. This is simple for a single server,
but a little more complex for virtual hosts; if you set up virtual
hosts, then you should know where to put these lines.
7. Stop and restart your httpd. I do it on Red Hat 4.1 like this:
/etc/rc.d/init.d/httpd.init stop
then
/etc/rc.d/init.d/httpd.init start
OR I understand you can send it a signal 16 like 'kill -16 <pid>' and do it.
8. It should now be working: query the web server about 30 or more times, then
look in the db and see what you have; if nothing, query the web server
30 or 50 more times and then check. If still nothing, look in the server's
error log to see what is going on. But you should have data.
NOTES:
The log data is cached somewhere, and so will not appear INSTANTLY in the
database! I found that it took around 30 queries of the web server before
many rows were written to the db at once.
ALSO, I leave it up to you to create any indexes on the table that you want.
The error log can (*I think*) also be sent to PostgreSQL in the same fashion.
At some point in the future, I will be writing some PHP to interface to this
and generate statistical-type reports, so check my site once in a while if
you are interested in this.
Terry Mackintosh <terry@terrym.com>
http://www.terrym.com
Have fun ... and remember, this is mostly just intended as a starting point,
not as a finished idea.
--- apachelog.sql : ---
drop table access;
CREATE TABLE access (host char(200), ident char(200), authuser char(200), accdate timestamp, request char(500), ttime int2, status int2, bytes int4) archive = none;
grant all on access to nobody;
--- httpconf.txt: ---
# This is mostly the same as the default, except there are no square brackets
# around the time or the extra timezone info; also added is the download time,
# 3rd from the end, in number of seconds.
LogFormat "insert into access values ( '%h', '%l', '%u', '%{%d/%b/%Y:%H:%M:%S}t', '%r', %T, %s, %b );"
# The above format ALMOST eliminates the need to use sed, except that I noticed
# that when a frameset page is called, the bytes transferred is '-', which
# will choke the insert, so it is replaced with '-1'.
TransferLog '| su -c "sed \"s/, - );$/, -1 );/\" | /usr/local/pgsql/bin/psql www_log" nobody'
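The sed substitution in the TransferLog pipeline above can be sketched in Python (a hypothetical helper using the same pattern as the sed expression):

```python
import re

def fix_dash_bytes(insert_line):
    """Rewrite a '-' bytes value at the end of a generated INSERT as -1,
    matching sed's s/, - );$/, -1 );/ so the insert does not choke."""
    return re.sub(r", - \);$", ", -1 );", insert_line)
```

Lines whose bytes field is numeric pass through unchanged.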


@ -1,16 +0,0 @@
# $PostgreSQL: pgsql/contrib/userlock/Makefile,v 1.20 2006/02/27 12:54:39 petere Exp $
MODULES = user_locks
DATA_built = user_locks.sql
DATA = uninstall_user_locks.sql
DOCS = README.user_locks
ifdef USE_PGXS
PGXS := $(shell pg_config --pgxs)
include $(PGXS)
else
subdir = contrib/userlock
top_builddir = ../..
include $(top_builddir)/src/Makefile.global
include $(top_srcdir)/contrib/contrib-global.mk
endif


@ -1,56 +0,0 @@
User locks, by Massimo Dal Zotto <dz@cs.unitn.it>
Copyright (C) 1999, Massimo Dal Zotto <dz@cs.unitn.it>
This software is distributed under the GNU General Public License
either version 2, or (at your option) any later version.
This loadable module provides support for user-level long-term cooperative
locks. For example one can write:
select some_fields, user_write_lock_oid(oid) from table where id='key';
Now if the returned user_write_lock_oid field is 1 you have acquired a
user lock on the oid of the selected tuple and can now do some long operation
on it, such as letting the data be edited by the user.
If it is 0 it means that the lock has already been acquired by some other
process and you should not use that item until the other has finished.
Note that in this case the query returns 0 immediately without waiting on
the lock. This is good if the lock is held for a long time.
After you have finished your work on that item you can do:
update table set some_fields where id='key';
select user_write_unlock_oid(oid) from table where id='key';
You can also ignore the failure and go ahead, but this could produce conflicts
or inconsistent data in your application. User locks require cooperative
behavior between users. User locks don't interfere with the normal locks
used by Postgres for transaction processing.
This could also be done by setting a flag in the record itself, but in
that case you have the overhead of updating the records, and some locks
might not be released if the backend or the application crashes
before resetting the lock flag.
It could also be done with a begin/end block, but in that case the entire
table would be locked by Postgres, and that is not acceptable for
a long period because other transactions would block completely.
The generic user locks use two values, group and id, to identify a lock.
Each of these are 32-bit integers.
The oid user lock functions, which take only an OID as argument, store the
OID as "id" with a group equal to 0.
The meaning of group and id is defined by the application. The user
lock code just takes two numbers and tells you if the corresponding
entity has been successfully locked. What this means is up to you.
My suggestion is that you use the group to identify an area of your
application and the id to identify an object in this area.
In all cases, user locks are local to individual databases within an
installation.
Note also that a process can acquire more than one lock on the same entity
and it must release the lock the corresponding number of times. This can
be done by calling the unlock function until it returns 0.
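The behavior described above — a non-blocking acquire that returns 1 or 0, and re-entrant locks that must be released the corresponding number of times — can be modelled with a toy in-process Python sketch. The class and its `owner` argument are hypothetical illustrations only; the real module uses the backend's lock manager, not a dictionary:

```python
class UserLocks:
    """Toy model of cooperative (group, id) user locks: acquire never
    blocks, and a holder must unlock as many times as it locked."""

    def __init__(self):
        self.held = {}  # (group, id) -> (owner, count)

    def user_lock(self, group, id, owner):
        key = (group, id)
        holder = self.held.get(key)
        if holder is None:
            self.held[key] = (owner, 1)
            return 1
        if holder[0] == owner:                  # re-entrant acquire
            self.held[key] = (owner, holder[1] + 1)
            return 1
        return 0                                # held elsewhere: fail at once

    def user_unlock(self, group, id, owner):
        key = (group, id)
        holder = self.held.get(key)
        if holder is None or holder[0] != owner:
            return 0                            # nothing (left) to release
        if holder[1] == 1:
            del self.held[key]
        else:
            self.held[key] = (owner, holder[1] - 1)
        return 1

    def user_write_lock_oid(self, oid, owner):
        return self.user_lock(0, oid, owner)    # OID locks use group 0
```

Calling user_unlock repeatedly until it returns 0 fully releases a multiply-acquired lock, as described above.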


@ -1,23 +0,0 @@
SET search_path = public;
DROP FUNCTION user_unlock_all();
DROP FUNCTION user_write_unlock_oid(int4);
DROP FUNCTION user_write_lock_oid(int4);
DROP FUNCTION user_write_unlock_oid(oid);
DROP FUNCTION user_write_lock_oid(oid);
DROP FUNCTION user_write_unlock(int4,oid);
DROP FUNCTION user_write_lock(int4,oid);
DROP FUNCTION user_write_unlock(int4,int4);
DROP FUNCTION user_write_lock(int4,int4);
DROP FUNCTION user_unlock(int4,int4,int4);
DROP FUNCTION user_lock(int4,int4,int4);


@ -1,82 +0,0 @@
/*
* user_locks.c --
*
* This loadable module provides support for user-level long-term
* cooperative locks.
*
* Copyright (C) 1999, Massimo Dal Zotto <dz@cs.unitn.it>
*
* This software is distributed under the GNU General Public License
* either version 2, or (at your option) any later version.
*/
#include "postgres.h"
#include "miscadmin.h"
#include "storage/lmgr.h"
#include "storage/proc.h"
#include "user_locks.h"
PG_MODULE_MAGIC;
#define SET_LOCKTAG_USERLOCK(locktag,id1,id2) \
((locktag).locktag_field1 = MyDatabaseId, \
(locktag).locktag_field2 = (id1), \
(locktag).locktag_field3 = (id2), \
(locktag).locktag_field4 = 0, \
(locktag).locktag_type = LOCKTAG_USERLOCK, \
(locktag).locktag_lockmethodid = USER_LOCKMETHOD)
int
user_lock(uint32 id1, uint32 id2, LOCKMODE lockmode)
{
LOCKTAG tag;
SET_LOCKTAG_USERLOCK(tag, id1, id2);
return (LockAcquire(&tag, lockmode, true, true) != LOCKACQUIRE_NOT_AVAIL);
}
int
user_unlock(uint32 id1, uint32 id2, LOCKMODE lockmode)
{
LOCKTAG tag;
SET_LOCKTAG_USERLOCK(tag, id1, id2);
return LockRelease(&tag, lockmode, true);
}
int
user_write_lock(uint32 id1, uint32 id2)
{
return user_lock(id1, id2, ExclusiveLock);
}
int
user_write_unlock(uint32 id1, uint32 id2)
{
return user_unlock(id1, id2, ExclusiveLock);
}
int
user_write_lock_oid(Oid oid)
{
return user_lock(0, oid, ExclusiveLock);
}
int
user_write_unlock_oid(Oid oid)
{
return user_unlock(0, oid, ExclusiveLock);
}
int
user_unlock_all(void)
{
LockReleaseAll(USER_LOCKMETHOD, true);
return true;
}


@ -1,14 +0,0 @@
#ifndef USER_LOCKS_H
#define USER_LOCKS_H
#include "storage/lock.h"
extern int user_lock(uint32 id1, uint32 id2, LOCKMODE lockmode);
extern int user_unlock(uint32 id1, uint32 id2, LOCKMODE lockmode);
extern int user_write_lock(uint32 id1, uint32 id2);
extern int user_write_unlock(uint32 id1, uint32 id2);
extern int user_write_lock_oid(Oid oid);
extern int user_write_unlock_oid(Oid oid);
extern int user_unlock_all(void);
#endif


@ -1,88 +0,0 @@
-- user_locks.sql --
--
-- SQL code to define the user locks functions.
--
-- Copyright (c) 1998, Massimo Dal Zotto <dz@cs.unitn.it>
--
-- This file is distributed under the GNU General Public License
-- either version 2, or (at your option) any later version.
-- Adjust this setting to control where the objects get created.
SET search_path = public;
-- SELECT user_lock(group,id,mode);
--
CREATE OR REPLACE FUNCTION user_lock(int4,int4,int4)
RETURNS int4
AS 'MODULE_PATHNAME'
LANGUAGE C STRICT;
-- SELECT user_unlock(group,id,mode);
--
CREATE OR REPLACE FUNCTION user_unlock(int4,int4,int4)
RETURNS int4
AS 'MODULE_PATHNAME'
LANGUAGE C STRICT;
-- SELECT user_write_lock(group,id);
--
CREATE OR REPLACE FUNCTION user_write_lock(int4,int4)
RETURNS int4
AS 'MODULE_PATHNAME'
LANGUAGE C STRICT;
-- SELECT user_write_unlock(group,id);
--
CREATE OR REPLACE FUNCTION user_write_unlock(int4,int4)
RETURNS int4
AS 'MODULE_PATHNAME'
LANGUAGE C STRICT;
-- SELECT user_write_lock(group,oid);
--
CREATE OR REPLACE FUNCTION user_write_lock(int4,oid)
RETURNS int4
AS 'MODULE_PATHNAME'
LANGUAGE C STRICT;
-- SELECT user_write_unlock(group,oid);
--
CREATE OR REPLACE FUNCTION user_write_unlock(int4,oid)
RETURNS int4
AS 'MODULE_PATHNAME'
LANGUAGE C STRICT;
-- SELECT user_write_lock_oid(oid);
--
CREATE OR REPLACE FUNCTION user_write_lock_oid(oid)
RETURNS int4
AS 'MODULE_PATHNAME'
LANGUAGE C STRICT;
-- SELECT user_write_unlock_oid(oid);
--
CREATE OR REPLACE FUNCTION user_write_unlock_oid(oid)
RETURNS int4
AS 'MODULE_PATHNAME'
LANGUAGE C STRICT;
-- SELECT user_write_lock_oid(int4);
--
CREATE OR REPLACE FUNCTION user_write_lock_oid(int4)
RETURNS int4
AS 'MODULE_PATHNAME'
LANGUAGE C STRICT;
-- SELECT user_write_unlock_oid(int4);
--
CREATE OR REPLACE FUNCTION user_write_unlock_oid(int4)
RETURNS int4
AS 'MODULE_PATHNAME'
LANGUAGE C STRICT;
-- SELECT user_unlock_all();
--
CREATE OR REPLACE FUNCTION user_unlock_all()
RETURNS int4
AS 'MODULE_PATHNAME'
LANGUAGE C STRICT;