Markup and spell-check run over Programmer's Guide (rather incomplete still).

Peter Eisentraut 2001-09-10 21:58:47 +00:00
parent 51ba1c5a8c
commit a457d33516
21 changed files with 845 additions and 806 deletions

View File

@ -66,7 +66,7 @@ From that point on, the frontend process and the backend
in mind,
because the files that can be accessed on a client
machine may not be accessible (or may only be accessed
using a different filename) on the database server
using a different file name) on the database server
machine.
You should also be aware that the <Application>postmaster</Application> and
postgres servers run with the user-id of the <ProductName>Postgres</ProductName>

View File

@ -1,5 +1,5 @@
<!--
$Header: /cvsroot/pgsql/doc/src/sgml/ecpg.sgml,v 1.21 2001/09/06 00:23:42 momjian Exp $
$Header: /cvsroot/pgsql/doc/src/sgml/ecpg.sgml,v 1.22 2001/09/10 21:58:46 petere Exp $
-->
<chapter id="ecpg">
@ -153,12 +153,12 @@ $Header: /cvsroot/pgsql/doc/src/sgml/ecpg.sgml,v 1.21 2001/09/06 00:23:42 momjia
<para>
To detect errors from the <productname>Postgres</productname>
server, include a line like:
<programlisting>
<programlisting>
exec sql include sqlca;
</programlisting>
</programlisting>
in the include section of your file. This will define a struct and
a variable with the name <parameter>sqlca</parameter> as follows:
<programlisting>
<programlisting>
struct sqlca
{
char sqlcaid[8];
@ -191,11 +191,11 @@ struct sqlca
/* 7: empty */
char sqlext[8];
} sqlca;
</programlisting>
</programlisting>
</para>
<para>
If an error occured in the last <acronym>SQL</acronym> statement.
If an error occurred in the last <acronym>SQL</acronym> statement.
<parameter>sqlca.sqlcode</parameter> will be non-zero. If
<parameter>sqlca.sqlcode</parameter> is less that 0, this is a
serious error, like the database definition does not match the
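As a point of reference, here is a minimal sketch of how a .pgc program might test sqlca.sqlcode after a statement. The database, table, and host variable names are invented for illustration, and the sqlca.sqlerrm.sqlerrmc message field is assumed from the full structure definition:
<programlisting>
#include &lt;stdio.h&gt;

exec sql include sqlca;

int
main(void)
{
    exec sql begin declare section;
    int age;
    exec sql end declare section;

    exec sql connect to mydb;   /* hypothetical database */

    exec sql select age into :age
             from people where name = 'joe';   /* hypothetical table */

    if (sqlca.sqlcode &lt; 0)              /* serious error */
        fprintf(stderr, "error %ld: %s\n",
                sqlca.sqlcode, sqlca.sqlerrm.sqlerrmc);
    else if (sqlca.sqlcode == 100)      /* nothing found */
        printf("no such person\n");
    else
        printf("joe is %d\n", age);

    exec sql disconnect;
    return 0;
}
</programlisting>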
@ -214,7 +214,7 @@ struct sqlca
<variablelist>
<varlistentry>
<term>-12, Out of memory in line %d.</term>
<term><computeroutput>-12, Out of memory in line %d.</computeroutput></term>
<listitem>
<para>
Should not normally occur. This indicates your virtual memory is
@ -224,7 +224,7 @@ struct sqlca
</varlistentry>
<varlistentry>
<term>-200, Unsupported type %s on line %d.</term>
<term><computeroutput>-200, Unsupported type %s on line %d.</computeroutput></term>
<listitem>
<para>
Should not normally occur. This indicates the preprocessor has
@ -236,7 +236,7 @@ struct sqlca
</varlistentry>
<varlistentry>
<term>-201, Too many arguments line %d.</term>
<term><computeroutput>-201, Too many arguments line %d.</computeroutput></term>
<listitem>
<para>
This means that <productname>Postgres</productname> has
@ -248,7 +248,7 @@ struct sqlca
</varlistentry>
<varlistentry>
<term>-202, Too few arguments line %d.</term>
<term><computeroutput>-202, Too few arguments line %d.</computeroutput></term>
<listitem>
<para>
This means that <productname>Postgres</productname> has
@ -260,7 +260,7 @@ struct sqlca
</varlistentry>
<varlistentry>
<term>-203, Too many matches line %d.</term>
<term><computeroutput>-203, Too many matches line %d.</computeroutput></term>
<listitem>
<para>
This means the query has returned several rows but the
@ -271,7 +271,7 @@ struct sqlca
</varlistentry>
<varlistentry>
<term>-204, Not correctly formatted int type: %s line %d.</term>
<term><computeroutput>-204, Not correctly formatted int type: %s line %d.</computeroutput></term>
<listitem>
<para>
This means the host variable is of type <type>int</type> and
@ -284,7 +284,7 @@ struct sqlca
</varlistentry>
<varlistentry>
<term>-205, Not correctly formatted unsigned type: %s line %d.</term>
<term><computeroutput>-205, Not correctly formatted unsigned type: %s line %d.</computeroutput></term>
<listitem>
<para>
This means the host variable is of type <type>unsigned
@ -298,7 +298,7 @@ struct sqlca
</varlistentry>
<varlistentry>
<term>-206, Not correctly formatted floating point type: %s line %d.</term>
<term><computeroutput>-206, Not correctly formatted floating point type: %s line %d.</computeroutput></term>
<listitem>
<para>
This means the host variable is of type <type>float</type> and
@ -311,7 +311,7 @@ struct sqlca
</varlistentry>
<varlistentry>
<term>-207, Unable to convert %s to bool on line %d.</term>
<term><computeroutput>-207, Unable to convert %s to bool on line %d.</computeroutput></term>
<listitem>
<para>
This means the host variable is of type <type>bool</type> and
@ -322,17 +322,17 @@ struct sqlca
</varlistentry>
<varlistentry>
<term>-208, Empty query line %d.</term>
<term><computeroutput>-208, Empty query line %d.</computeroutput></term>
<listitem>
<para>
<productname>Postgres</productname> returned PGRES_EMPTY_QUERY, probably
<productname>Postgres</productname> returned <symbol>PGRES_EMPTY_QUERY</symbol>, probably
because the query indeed was empty.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>-220, No such connection %s in line %d.</term>
<term><computeroutput>-220, No such connection %s in line %d.</computeroutput></term>
<listitem>
<para>
The program tried to access a connection that does not exist.
@ -341,7 +341,7 @@ struct sqlca
</varlistentry>
<varlistentry>
<term>-221, Not connected in line %d.</term>
<term><computeroutput>-221, Not connected in line %d.</computeroutput></term>
<listitem>
<para>
The program tried to access a connection that does exist but is
@ -351,7 +351,7 @@ struct sqlca
</varlistentry>
<varlistentry>
<term>-230, Invalid statement name %s in line %d.</term>
<term><computeroutput>-230, Invalid statement name %s in line %d.</computeroutput></term>
<listitem>
<para>
The statement you are trying to use has not been prepared.
@ -360,7 +360,7 @@ struct sqlca
</varlistentry>
<varlistentry>
<term>-400, Postgres error: %s line %d.</term>
<term><computeroutput>-400, Postgres error: %s line %d.</computeroutput></term>
<listitem>
<para>
Some <productname>Postgres</productname> error.
@ -371,7 +371,7 @@ struct sqlca
</varlistentry>
<varlistentry>
<term>-401, Error in transaction processing line %d. </term>
<term><computeroutput>-401, Error in transaction processing line %d.</computeroutput></term>
<listitem>
<para>
<productname>Postgres</productname> signaled that we cannot start,
@ -381,7 +381,7 @@ struct sqlca
</varlistentry>
<varlistentry>
<term>-402, connect: could not open database %s.</term>
<term><computeroutput>-402, connect: could not open database %s.</computeroutput></term>
<listitem>
<para>
The connect to the database did not work.
@ -390,10 +390,10 @@ struct sqlca
</varlistentry>
<varlistentry>
<term>100, Data not found line %d.</term>
<term><computeroutput>100, Data not found line %d.</computeroutput></term>
<listitem>
<para>
This is a "normal" error that tells you that what you are quering cannot
This is a "normal" error that tells you that what you are querying cannot
be found or you are at the end of the cursor.
</para>
</listitem>
@ -419,7 +419,7 @@ struct sqlca
<term>Oracle's single tasking</term>
<listitem>
<para>
Oracle version 7.0 on AIX 3 uses OS-supported locks in shared
Oracle version 7.0 on <systemitem class="osname">AIX</> 3 uses OS-supported locks in shared
memory that allow an application designer to link an application
in a "single tasking" way. Instead of starting one client
process per application process, both the database part and the
@ -526,21 +526,21 @@ struct sqlca
The following statements are not implemented thus far:
<variablelist>
<varlistentry>
<term> exec sql allocate</term>
<term><literal>exec sql allocate</></term>
<listitem>
<para>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term> exec sql deallocate</term>
<term><literal>exec sql deallocate</></term>
<listitem>
<para>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term> SQLSTATE</term>
<term><literal>SQLSTATE</></term>
<listitem>
<para>
</para>
@ -565,10 +565,10 @@ exec sql insert select from statement
</varlistentry>
<varlistentry>
<term>sqlwarn[6]</term>
<term><literal>sqlwarn[6]</literal></term>
<listitem>
<para>
sqlwarn[6] should be <literal>W</> if the <command>PRECISION</>
<literal>sqlwarn[6]</literal> should be <literal>W</> if the <command>PRECISION</>
or <command>SCALE</> value specified in a <command>SET
DESCRIPTOR</> statement was ignored.
</para>
@ -582,7 +582,7 @@ exec sql insert select from statement
<title>The Preprocessor</title>
<para>
The first four lines written by ecpg to the output are fixed lines.
The first four lines written by <command>ecpg</command> to the output are fixed lines.
Two are comments and two are include lines necessary to interface
to the library.
</para>
@ -629,7 +629,7 @@ exec sql end declare section;
</para>
<para>
The special types VARCHAR and VARCHAR2 are converted into a named struct
The special types <type>VARCHAR</type> and <type>VARCHAR2</type> are converted into a named struct
for every variable. A declaration like:
<programlisting>
VARCHAR var[180];
@ -745,7 +745,7 @@ exec sql connect to <replaceable>connection target</replaceable>;
</para>
<para>
Finally, the userid and password may be a constant text, a
Finally, the <replaceable>userid</replaceable> and <replaceable>password</replaceable> may be a constant text, a
character variable, or a character string.
</para>
</listitem>
@ -853,7 +853,7 @@ exec sql rollback;
<simplelist>
<member>The type as a special symbol.</member>
<member>A pointer to the value or a pointer to the pointer.</member>
<member>The size of the variable if it is a char or varchar.</member>
<member>The size of the variable if it is a <type>char</type> or <type>varchar</type>.</member>
<member>The number of elements in the array (for array fetches).</member>
<member>The offset to the next element in the array (for array fetches).</member>
<member>The type of the indicator variable as a special symbol.</member>
@ -876,7 +876,7 @@ exec sql rollback;
<para>
Here is a complete example describing the output of the preprocessor of a
file foo.pgc:
file <filename>foo.pgc</filename>:
<programlisting>
exec sql begin declare section;
int index;
@ -919,7 +919,7 @@ ECPGdo(__LINE__, NULL, "select res from mytable where index = ? ",
The most important function in the library is
<function>ECPGdo</function>. It takes a variable number of
arguments. Hopefully there are no computers that limit the
number of variables that can be accepted by a varargs() function. This
number of variables that can be accepted by a <function>varargs()</function> function. This
can easily add up to 50 or so arguments.
</para>
@ -960,7 +960,7 @@ ECPGdo(__LINE__, NULL, "select res from mytable where index = ? ",
</varlistentry>
<varlistentry>
<term>ECPGt_EOIT</term>
<term><literal>ECPGt_EOIT</literal></term>
<listitem>
<para>
An enum telling that there are no more input variables.

View File

@ -1,5 +1,5 @@
<!--
$Header: /cvsroot/pgsql/doc/src/sgml/extend.sgml,v 1.10 2001/05/17 21:50:15 petere Exp $
$Header: /cvsroot/pgsql/doc/src/sgml/extend.sgml,v 1.11 2001/09/10 21:58:46 petere Exp $
-->
<chapter id="extend">
@ -236,7 +236,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/extend.sgml,v 1.10 2001/05/17 21:50:15 pete
<note>
<para>
We use the words <firstterm>procedure</firstterm>
and <firstterm>function</firstterm> more or less interchangably.
and <firstterm>function</firstterm> more or less interchangeably.
</para>
</note>

View File

@ -1,5 +1,5 @@
<!--
$Header: /cvsroot/pgsql/doc/src/sgml/Attic/indexcost.sgml,v 2.7 2001/05/09 23:13:34 tgl Exp $
$Header: /cvsroot/pgsql/doc/src/sgml/Attic/indexcost.sgml,v 2.8 2001/09/10 21:58:46 petere Exp $
-->
<chapter id="indexcost">
@ -154,7 +154,7 @@ amcostestimate (Query *root,
<para>
The index access costs should be computed in the units used by
src/backend/optimizer/path/costsize.c: a sequential disk block fetch
<filename>src/backend/optimizer/path/costsize.c</filename>: a sequential disk block fetch
has cost 1.0, a nonsequential fetch has cost random_page_cost, and
the cost of processing one index tuple should usually be taken as
cpu_index_tuple_cost (which is a user-adjustable optimizer parameter).
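To make those units concrete, a simplistic estimate that charged one nonsequential fetch per index page visited plus one tuple-processing charge per index tuple visited would be (purely an illustration of the units, not the estimator the backend actually uses; numIndexPages and numIndexTuples stand for values derived from the selectivity of the index quals):
<programlisting>
indexTotalCost = numIndexPages * random_page_cost
               + numIndexTuples * cpu_index_tuple_cost
</programlisting>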

View File

@ -1,5 +1,5 @@
<!--
$Header: /cvsroot/pgsql/doc/src/sgml/Attic/jdbc.sgml,v 1.21 2001/08/16 16:24:15 momjian Exp $
$Header: /cvsroot/pgsql/doc/src/sgml/Attic/jdbc.sgml,v 1.22 2001/09/10 21:58:46 petere Exp $
-->
<chapter id="jdbc">
@ -514,7 +514,7 @@ mycon.setAutoCommit(false);
There are two methods of using Large Objects. The first is the
standard <acronym>JDBC</acronym> way, and is documented here. The
other, uses <productname>PostgreSQL</productname> extensions to
the <acronym>API</acronym>, which presents the libpq large object
the <acronym>API</acronym>, which presents the <application>libpq</application> large object
<acronym>API</acronym> to Java, providing even better access to
large objects than the standard. Internally, the driver uses the
extension to provide large object support.
@ -674,7 +674,7 @@ import org.postgresql.fastpath.*;
...
Fastpath fp = ((org.postgresql.Connection)myconn).getFastpathAPI();
</programlisting>
where myconn is an open Connection to <productname>PostgreSQL</productname>.
where <varname>myconn</> is an open <classname>Connection</> to <productname>PostgreSQL</productname>.
</para>
<formalpara>
@ -709,21 +709,21 @@ import org.postgresql.largeobject.*;
...
LargeObjectManager lo = ((org.postgresql.Connection)myconn).getLargeObjectAPI();
</programlisting>
where myconn is an open Connection to
where <varname>myconn</> is an open <classname>Connection</> to
<productname>PostgreSQL</productname>.
</para>
<formalpara>
<title>Returns:</title>
<para>
LargeObject object that implements the <acronym>API</acronym>
<classname>LargeObject</classname> object that implements the <acronym>API</acronym>
</para>
</formalpara>
<formalpara>
<title>Throws:</title>
<para>
SQLException by LargeObject when initializing for first time
<classname>SQLException</classname> by <classname>LargeObject</classname> when initializing for first time
</para>
</formalpara>
</listitem>
@ -735,9 +735,9 @@ public void addDataType(String type, String name)
</synopsis>
This allows client code to add a handler for one of
PostgreSQL's more unique data types. Normally, a data type not
known by the driver is returned by ResultSet.getObject() as a
PGobject instance. This method allows you to write a class
that extends PGobject, and tell the driver the type name, and
known by the driver is returned by <literal>ResultSet.getObject()</literal> as a
<classname>PGobject</> instance. This method allows you to write a class
that extends <classname>PGobject</>, and tell the driver the type name, and
class name to use. The down side to this, is that you must
call this method each time a connection is made.
</para>
@ -749,7 +749,7 @@ public void addDataType(String type, String name)
((org.postgresql.Connection)myconn).addDataType("mytype","my.class.name");
...
</programlisting>
where myconn is an open Connection to
where <varname>myconn</varname> is an open <classname>Connection</> to
<productname>PostgreSQL</productname>. The handling class must
extend <classname>org.postgresql.util.PGobject</classname>.
</para>
@ -772,7 +772,7 @@ java.lang.Object
<para>
<classname>Fastpath</classname> is an <acronym>API</acronym> that
exists within the libpq C interface, and allows a client machine
exists within the <application>libpq</application> C interface, and allows a client machine
to execute a function on the database backend. Most client code
will not need to use this method, but it is provided because the
Large Object <acronym>API</acronym> uses it.
@ -825,11 +825,11 @@ public Object fastpath(int fnid,
<formalpara>
<title>Parameters:</title>
<para>
fnid - Function id
resulttype - True if the result is an integer, false
<parameter>fnid</> - Function id
<parameter>resulttype</> - True if the result is an integer, false
for
other results
args - FastpathArguments to pass to fastpath
<parameter>args</> - <classname>FastpathArguments</classname> to pass to fastpath
</para>
</formalpara>
@ -855,7 +855,7 @@ public Object fastpath(String name,
<note>
<para>
The mapping for the procedure name to function id needs to
exist, usually to an earlier call to addfunction(). This is
exist, usually to an earlier call to <function>addfunction()</function>. This is
the preferred method to call, as function id's can/may change
between versions of the backend. For an example of how this
works, refer to org.postgresql.LargeObject
@ -865,11 +865,11 @@ public Object fastpath(String name,
<formalpara>
<title>Parameters:</title>
<para>
name - Function name
resulttype - True if the result is an integer, false
<parameter>name</> - Function name
<parameter>resulttype</> - True if the result is an integer, false
for
other results
args - FastpathArguments to pass to fastpath
<parameter>args</> - <classname>FastpathArguments</classname> to pass to fastpath
</para>
</formalpara>
@ -899,8 +899,8 @@ public int getInteger(String name,
<formalpara>
<title>Parameters:</title>
<para>
name - Function name
args - Function arguments
<parameter>name</parameter> - Function name
<parameter>args</parameter> - Function arguments
</para>
</formalpara>
@ -912,7 +912,7 @@ public int getInteger(String name,
<formalpara>
<title>Throws:</title>
<para>
SQLException if a database-access error occurs or no result
<classname>SQLException</classname> if a database-access error occurs or no result
</para>
</formalpara>
</listitem>
@ -930,8 +930,8 @@ public byte[] getData(String name,
<formalpara>
<title>Parameters:</title>
<para>
name - Function name
args - Function arguments
<parameter>name</parameter> - Function name
<parameter>args</parameter> - Function arguments
</para>
</formalpara>
@ -943,7 +943,7 @@ public byte[] getData(String name,
<formalpara>
<title>Throws:</title>
<para>
SQLException if a database-access error occurs or no result
<classname>SQLException</classname> if a database-access error occurs or no result
</para>
</formalpara>
</listitem>
@ -984,7 +984,7 @@ public void addFunctions(ResultSet rs) throws SQLException
<para>
PostgreSQL stores the function id's and their corresponding
names in the pg_proc table. To speed things up locally,
names in the <classname>pg_proc</> table. To speed things up locally,
instead of querying each function from that table when
required, a <classname>Hashtable</classname> is used. Also, only the function's
required are entered into this table, keeping connection
@ -1002,7 +1002,7 @@ public void addFunctions(ResultSet rs) throws SQLException
<para>
Do not think that manually converting them to the oid's will
work. Okay, they will for now, but they can change during
work. OK, they will for now, but they can change during
development (there was some discussion about this for V7.0),
so this is implemented to prevent any unwarranted headaches
in the future.
@ -1023,8 +1023,8 @@ public int getID(String name) throws SQLException
</synopsis>
<para>
This returns the function id associated by its name If
addFunction() or addFunctions() have not been called for this
name, then an SQLException is thrown.
<function>addFunction()</function> or <function>addFunctions()</function> have not been called for this
name, then an <classname>SQLException</classname> is thrown.
</para>
</listitem>
</itemizedlist>
@ -1112,21 +1112,21 @@ public FastpathArg(byte buf[],
<para>
<variablelist>
<varlistentry>
<term>buf</term>
<term><parameter>buf</></term>
<listitem>
<simpara>source array</simpara>
</listitem>
</varlistentry>
<varlistentry>
<term>off</term>
<term><parameter>off</parameter></term>
<listitem>
<simpara>offset within array</simpara>
</listitem>
</varlistentry>
<varlistentry>
<term>len</term>
<term><parameter>len</parameter></term>
<listitem>
<simpara>length of data to include</simpara>
</listitem>
@ -1880,7 +1880,7 @@ Methods
</para>
<para>
The org.postgresql.largeobject package provides to Java the libpq
The org.postgresql.largeobject package provides to Java the <application>libpq</application>
C interface's large object <acronym>API</acronym>. It consists of
two classes, <classname>LargeObjectManager</classname>, which deals with creating,
opening and deleting large objects, and <classname>LargeObject</classname> which deals

View File

@ -1,9 +1,9 @@
<!--
$Header: /cvsroot/pgsql/doc/src/sgml/Attic/libpgeasy.sgml,v 2.5 2001/04/20 15:52:33 thomas Exp $
$Header: /cvsroot/pgsql/doc/src/sgml/Attic/libpgeasy.sgml,v 2.6 2001/09/10 21:58:46 petere Exp $
-->
<chapter id="pgeasy-chapter">
<title id="pgeasy">libpgeasy - Simplified C Library</title>
<title id="pgeasy"><application>libpgeasy</application> - Simplified C Library</title>
<note>
<title>Author</title>
@ -23,74 +23,74 @@ $Header: /cvsroot/pgsql/doc/src/sgml/Attic/libpgeasy.sgml,v 2.5 2001/04/20 15:52
<para>
It consists of set of simplified C functions that encapsulate the
functionality of libpq.
functionality of <application>libpq</application>.
The functions are:
<itemizedlist>
<listitem>
<para>
<synopsis>
PGresult *doquery(char *query);
</para>
</synopsis>
</listitem>
<listitem>
<para>
<synopsis>
PGconn *connectdb(char *options);
</para>
</synopsis>
</listitem>
<listitem>
<para>
<synopsis>
void disconnectdb();
</para>
</synopsis>
</listitem>
<listitem>
<para>
<synopsis>
int fetch(void *param,...);
</para>
</synopsis>
</listitem>
<listitem>
<para>
<synopsis>
int fetchwithnulls(void *param,...);
</para>
</synopsis>
</listitem>
<listitem>
<para>
<synopsis>
void reset_fetch();
</para>
</synopsis>
</listitem>
<listitem>
<para>
<synopsis>
void on_error_continue();
</para>
</synopsis>
</listitem>
<listitem>
<para>
<synopsis>
void on_error_stop();
</para>
</synopsis>
</listitem>
<listitem>
<para>
<synopsis>
PGresult *get_result();
</para>
</synopsis>
</listitem>
<listitem>
<para>
<synopsis>
void set_result(PGresult *newres);
</para>
</synopsis>
</listitem>
<listitem>
<para>
<synopsis>
void unset_result(PGresult *oldres);
</para>
</synopsis>
</listitem>
</itemizedlist>
</para>
@ -118,7 +118,7 @@ void unset_result(PGresult *oldres);
<function>fetchwithnulls</function> allows you to retrieve the NULL
status of the field by passing an <literal>int*</literal>
after each result pointer, which returns true or false if the field is null.
You can always use libpq functions on the PGresult pointer returned
You can always use <application>libpq</application> functions on the <structname>PGresult</structname> pointer returned
by <function>doquery</function>.
<function>reset_fetch</function> starts the fetch back at the beginning.
</para>
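A minimal sketch of the usage pattern these functions imply; the header name, the END_OF_TUPLES sentinel returned by fetch, and the people table are assumptions made for illustration:
<programlisting>
#include &lt;stdio.h&gt;
#include &lt;libpgeasy.h&gt;                       /* assumed header name */

int
main(void)
{
    char name[32];
    int  age;

    connectdb("dbname=template1");           /* option string as for PQconnectdb */
    doquery("SELECT name, age FROM people"); /* hypothetical table */

    while (fetch(name, &amp;age) != END_OF_TUPLES)   /* sentinel assumed */
        printf("%s is %d\n", name, age);

    disconnectdb();
    return 0;
}
</programlisting>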

View File

@ -1,5 +1,5 @@
<chapter id="pgtcl">
<title>pgtcl - Tcl Binding Library</title>
<title><application>pgtcl</application> - Tcl Binding Library</title>
<indexterm zone="pgtcl">
<primary>libpgtcl</primary>
@ -10,10 +10,10 @@
</indexterm>
<Para>
<literal>pgtcl</literal> is a tcl package for front-end programs
<literal>pgtcl</literal> is a Tcl package for front-end programs
to interface with <ProductName>Postgres</ProductName>
backends. It makes most of the functionality of <literal>libpq</literal> available to
tcl scripts.
Tcl scripts.
</Para>
<Para>
@ -35,72 +35,72 @@ This package was originally written by Jolly Chen.
</THEAD>
<TBODY>
<ROW>
<ENTRY>pg_connect</ENTRY>
<ENTRY><function>pg_connect</function></ENTRY>
<ENTRY>opens a connection to the backend server</ENTRY>
</ROW>
<ROW>
<ENTRY>pg_disconnect</ENTRY>
<ENTRY><function>pg_disconnect</function></ENTRY>
<ENTRY>closes a connection</ENTRY>
</ROW>
<ROW>
<ENTRY>pg_conndefaults</ENTRY>
<ENTRY><function>pg_conndefaults</function></ENTRY>
<ENTRY>get connection options and their defaults</ENTRY>
</ROW>
<ROW>
<ENTRY>pg_exec</ENTRY>
<ENTRY><function>pg_exec</function></ENTRY>
<ENTRY>send a query to the backend</ENTRY>
</ROW>
<ROW>
<ENTRY>pg_result</ENTRY>
<ENTRY><function>pg_result</function></ENTRY>
<ENTRY>manipulate the results of a query</ENTRY>
</ROW>
<ROW>
<ENTRY>pg_select</ENTRY>
<ENTRY><function>pg_select</function></ENTRY>
<ENTRY>loop over the result of a SELECT statement</ENTRY>
</ROW>
<ROW>
<ENTRY>pg_listen</ENTRY>
<ENTRY><function>pg_listen</function></ENTRY>
<ENTRY>establish a callback for NOTIFY messages</ENTRY>
</ROW>
<ROW>
<ENTRY>pg_lo_creat</ENTRY>
<ENTRY><function>pg_lo_creat</function></ENTRY>
<ENTRY>create a large object</ENTRY>
</ROW>
<ROW>
<ENTRY>pg_lo_open</ENTRY>
<ENTRY><function>pg_lo_open</function></ENTRY>
<ENTRY>open a large object</ENTRY>
</ROW>
<ROW>
<ENTRY>pg_lo_close</ENTRY>
<ENTRY><function>pg_lo_close</function></ENTRY>
<ENTRY>close a large object</ENTRY>
</ROW>
<ROW>
<ENTRY>pg_lo_read</ENTRY>
<ENTRY><function>pg_lo_read</function></ENTRY>
<ENTRY>read a large object</ENTRY>
</ROW>
<ROW>
<ENTRY>pg_lo_write</ENTRY>
<ENTRY><function>pg_lo_write</function></ENTRY>
<ENTRY>write a large object</ENTRY>
</ROW>
<ROW>
<ENTRY>pg_lo_lseek</ENTRY>
<ENTRY><function>pg_lo_lseek</function></ENTRY>
<ENTRY>seek to a position in a large object</ENTRY>
</ROW>
<ROW>
<ENTRY>pg_lo_tell</ENTRY>
<ENTRY><function>pg_lo_tell</function></ENTRY>
<ENTRY>return the current seek position of a large object</ENTRY>
</ROW>
<ROW>
<ENTRY>pg_lo_unlink</ENTRY>
<ENTRY><function>pg_lo_unlink</function></ENTRY>
<ENTRY>delete a large object</ENTRY>
</ROW>
<ROW>
<ENTRY>pg_lo_import</ENTRY>
<ENTRY><function>pg_lo_import</function></ENTRY>
<ENTRY>import a Unix file into a large object</ENTRY>
</ROW>
<ROW>
<ENTRY>pg_lo_export</ENTRY>
<ENTRY><function>pg_lo_export</function></ENTRY>
<ENTRY>export a large object into a Unix file</ENTRY>
</ROW>
</TBODY>
@ -154,7 +154,7 @@ proc getDBs { {host "localhost"} {port "5432"} } {
</Sect1>
<Sect1 id="libpgtcl-ref">
<Title>pgtcl Command Reference Information</Title>
<Title><application>pgtcl</application> Command Reference Information</Title>
<REFENTRY ID="PGTCL-PGCONNECT">
<REFMETA>
@ -243,7 +243,7 @@ A list of valid options can be found in <filename>libpq</>'s
<OPTIONAL>-tty <REPLACEABLE CLASS="PARAMETER">pqtty</REPLACEABLE></OPTIONAL>
</TERM>
<LISTITEM>
<PARA>Specifies file or tty for optional debug output from backend.
<PARA>Specifies file or <acronym>tty</acronym> for optional debug output from backend.
</PARA>
</LISTITEM>
</VARLISTENTRY>
@ -273,7 +273,7 @@ A list of valid options can be found in <filename>libpq</>'s
<LISTITEM>
<PARA>
If successful, a handle for a database connection is returned.
Handles start with the prefix "pgsql".
Handles start with the prefix <literal>pgsql</literal>.
</PARA>
</LISTITEM>
</VARLISTENTRY>
@ -304,8 +304,9 @@ for info about the available options in the newer syntax.
<REFSECT1 ID="R1-PGTCL-PGCONNECT-2">
<TITLE>Usage
</TITLE>
<PARA>XXX thomas 1997-12-24
</PARA>
<comment>
XXX thomas 1997-12-24
</comment>
</REFSECT1>
</REFENTRY>
@ -433,11 +434,11 @@ The result is a list describing the possible connection options and their
current default values.
Each entry in the list is a sublist of the format:
</Para>
<para>
{optname label dispchar dispsize value}
</Para>
<screen>
{optname label dispchar dispsize value}
</screen>
<Para>
where the optname is usable as an option in
where the <replaceable>optname</> is usable as an option in
<FUNCTION>pg_connect -conninfo</FUNCTION>.
</PARA>
</LISTITEM>
@ -462,7 +463,7 @@ current default value for each option.
<REFSECT1 ID="R1-PGTCL-PGCONNDEFAULTS-2">
<TITLE>Usage
</TITLE>
<PARA>pg_conndefaults
<PARA><literal>pg_conndefaults</>
</PARA>
</REFSECT1>
</REFENTRY>
@ -532,7 +533,7 @@ pg_exec <REPLACEABLE CLASS="PARAMETER">dbHandle</REPLACEABLE> <REPLACEABLE CLASS
</TERM>
<LISTITEM>
<PARA>
A Tcl error will be returned if Pgtcl was unable to obtain a backend
A Tcl error will be returned if <application>pgtcl</application> was unable to obtain a backend
response. Otherwise, a query result object is created and a handle for
it is returned. This handle can be passed to <FUNCTION>pg_result</FUNCTION>
to obtain the results of the query.
@ -620,7 +621,7 @@ Specifies one of several possible options.
<VARIABLELIST>
<VARLISTENTRY>
<TERM>
-status
<option>-status</option>
</TERM>
<LISTITEM>
<PARA>
@ -630,7 +631,7 @@ the status of the result.
</VARLISTENTRY>
<VARLISTENTRY>
<TERM>
-error
<option>-error</option>
</TERM>
<LISTITEM>
<PARA>
@ -640,7 +641,7 @@ the error message, if the status indicates error; otherwise an empty string.
</VARLISTENTRY>
<VARLISTENTRY>
<TERM>
-conn
<option>-conn</option>
</TERM>
<LISTITEM>
<PARA>
@ -650,7 +651,7 @@ the connection that produced the result.
</VARLISTENTRY>
<VARLISTENTRY>
<TERM>
-oid
<option>-oid</option>
</TERM>
<LISTITEM>
<PARA>
@ -661,7 +662,7 @@ inserted tuple; otherwise an empty string.
</VARLISTENTRY>
<VARLISTENTRY>
<TERM>
-numTuples
<option>-numTuples</option>
</TERM>
<LISTITEM>
<PARA>
@ -671,7 +672,7 @@ the number of tuples returned by the query.
</VARLISTENTRY>
<VARLISTENTRY>
<TERM>
-numAttrs
<option>-numAttrs</option>
</TERM>
<LISTITEM>
<PARA>
@ -681,7 +682,7 @@ the number of attributes in each tuple.
</VARLISTENTRY>
<VARLISTENTRY>
<TERM>
-list VarName
<option>-list VarName</option>
</TERM>
<LISTITEM>
<PARA>
@ -691,32 +692,32 @@ assign the results to a list of lists.
</VARLISTENTRY>
<VARLISTENTRY>
<TERM>
-assign arrayName
<option>-assign arrayName</option>
</TERM>
<LISTITEM>
<PARA>
assign the results to an array, using subscripts of the form
(tupno,attributeName).
<literal>(tupno,attributeName)</literal>.
</PARA>
</LISTITEM>
</VARLISTENTRY>
<VARLISTENTRY>
<TERM>
-assignbyidx arrayName ?appendstr?
<option>-assignbyidx arrayName ?appendstr?</option>
</TERM>
<LISTITEM>
<PARA>
assign the results to an array using the first attribute's value and
the remaining attributes' names as keys. If appendstr is given then
the remaining attributes' names as keys. If <parameter>appendstr</> is given then
it is appended to each key. In short, all but the first field of each
tuple are stored into the array, using subscripts of the form
(firstFieldValue,fieldNameAppendStr).
<literal>(firstFieldValue,fieldNameAppendStr)</literal>.
</PARA>
</LISTITEM>
</VARLISTENTRY>
<VARLISTENTRY>
<TERM>
-getTuple tupleNumber
<option>-getTuple tupleNumber</option>
</TERM>
<LISTITEM>
<PARA>
@ -727,18 +728,18 @@ start at zero.
</VARLISTENTRY>
<VARLISTENTRY>
<TERM>
-tupleArray tupleNumber arrayName
<option>-tupleArray tupleNumber arrayName</option>
</TERM>
<LISTITEM>
<PARA>
stores the fields of the tuple in array arrayName, indexed by field names.
stores the fields of the tuple in array <parameter>arrayName</parameter>, indexed by field names.
Tuple numbers start at zero.
</PARA>
</LISTITEM>
</VARLISTENTRY>
<VARLISTENTRY>
<TERM>
-attributes
<option>-attributes</option>
</TERM>
<LISTITEM>
<PARA>
@ -748,17 +749,17 @@ returns a list of the names of the tuple attributes.
</VARLISTENTRY>
<VARLISTENTRY>
<TERM>
-lAttributes
<option>-lAttributes</option>
</TERM>
<LISTITEM>
<PARA>
returns a list of sublists, {name ftype fsize} for each tuple attribute.
returns a list of sublists, <literal>{name ftype fsize}</literal> for each tuple attribute.
</PARA>
</LISTITEM>
</VARLISTENTRY>
<VARLISTENTRY>
<TERM>
-clear
<option>-clear</option>
</TERM>
<LISTITEM>
<PARA>
@ -796,7 +797,7 @@ created by a prior <FUNCTION>pg_exec</FUNCTION>.
You can keep a query result around for as long as you need it, but when
you are done with it, be sure to free it by
executing <FUNCTION>pg_result -clear</FUNCTION>. Otherwise, you have
a memory leak, and Pgtcl will eventually start complaining that you've
a memory leak, and <application>Pgtcl</> will eventually start complaining that you've
created too many query result objects.
</PARA>
</REFSECT1>
@ -1021,9 +1022,9 @@ when a matching notification arrives.
</TITLE>
<PARA><FUNCTION>pg_listen</FUNCTION> creates, changes, or cancels a request
to listen for asynchronous NOTIFY messages from the
<ProductName>Postgres</ProductName> backend. With a callbackCommand
<ProductName>Postgres</ProductName> backend. With a <parameter>callbackCommand</>
parameter, the request is established, or the command string of an already
existing request is replaced. With no callbackCommand parameter, a prior
existing request is replaced. With no <parameter>callbackCommand</> parameter, a prior
request is canceled.
</PARA>
@ -1041,8 +1042,8 @@ the idle loop to be entered.
</Para>
<para>
You should not invoke the SQL statements LISTEN or UNLISTEN directly when
using <FUNCTION>pg_listen</FUNCTION>. Pgtcl takes care of issuing those
You should not invoke the SQL statements <command>LISTEN</command> or <command>UNLISTEN</command> directly when
using <FUNCTION>pg_listen</FUNCTION>. <application>Pgtcl</application> takes care of issuing those
statements for you. But if you want to send a NOTIFY message yourself,
invoke the SQL NOTIFY statement using <FUNCTION>pg_exec</FUNCTION>.
</PARA>
@ -1754,7 +1755,9 @@ pg_lo_unlink <REPLACEABLE CLASS="PARAMETER">conn</REPLACEABLE> <REPLACEABLE CLAS
<LISTITEM>
<PARA>
Identifier for a large object.
XXX Is this the same as objOid in other calls?? - thomas 1998-01-11
<comment>
XXX Is this the same as <parameter>objOid</parameter> in other calls?? - thomas 1998-01-11
</comment>
</PARA>
</LISTITEM>
</VARLISTENTRY>
@ -1850,7 +1853,9 @@ Unix file name.
</TITLE>
<PARA>
None
XXX Does this return a lobjId? Is that the same as the objOid in other calls? thomas - 1998-01-11
<comment>
XXX Does this return a lobjId? Is that the same as the objOid in other calls? thomas - 1998-01-11
</comment>
</PARA>
</REFSECT2>
</REFSYNOPSISDIV>
@ -1919,7 +1924,9 @@ pg_lo_export <REPLACEABLE CLASS="PARAMETER">conn</REPLACEABLE> <REPLACEABLE CLAS
<LISTITEM>
<PARA>
Large object identifier.
XXX Is this the same as the objOid in other calls?? thomas - 1998-01-11
<comment>
XXX Is this the same as the objOid in other calls?? thomas - 1998-01-11
</comment>
</PARA>
</LISTITEM>
</VARLISTENTRY>
@ -1944,7 +1951,9 @@ Unix file name.
</TITLE>
<PARA>
None
XXX Does this return a lobjId? Is that the same as the objOid in other calls? thomas - 1998-01-11
<comment>
XXX Does this return a lobjId? Is that the same as the objOid in other calls? thomas - 1998-01-11
</comment>
</PARA>
</REFSECT2>
</REFSYNOPSISDIV>

View File

@ -1,9 +1,9 @@
<!--
$Header: /cvsroot/pgsql/doc/src/sgml/Attic/libpq++.sgml,v 1.30 2001/05/09 17:46:11 momjian Exp $
$Header: /cvsroot/pgsql/doc/src/sgml/Attic/libpq++.sgml,v 1.31 2001/09/10 21:58:46 petere Exp $
-->
<chapter id="libpqplusplus">
<title>libpq++ - C++ Binding Library</title>
<title><application>libpq++</application> - C++ Binding Library</title>
<para>
<filename>libpq++</filename> is the C++ API to
@ -74,7 +74,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/Attic/libpq++.sgml,v 1.30 2001/05/09 17:46:
<para>
<filename>libpq++</filename> uses only environment variables or
<filename>libpq</>'s <function>PQconnectdb</>
conninfo style strings.
<parameter>conninfo</parameter> style strings.
</para>
</note>
@ -84,7 +84,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/Attic/libpq++.sgml,v 1.30 2001/05/09 17:46:
<envar>PGHOST</envar> sets the default server name.
If this begins with a slash, it specifies Unix-domain communication
rather than TCP/IP communication; the value is the name of the
directory in which the socket file is stored (default "/tmp").
directory in which the socket file is stored (default <filename>/tmp</filename>).
</para>
</listitem>
<listitem>
@ -103,7 +103,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/Attic/libpq++.sgml,v 1.30 2001/05/09 17:46:
<listitem>
<para>
<envar>PGUSER</envar>
sets the username used to connect to the database and for authentication.
sets the user name used to connect to the database and for authentication.
</para>
</listitem>
<listitem>
@ -133,7 +133,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/Attic/libpq++.sgml,v 1.30 2001/05/09 17:46:
</listitem>
<listitem>
<para>
<envar>PGTTY</envar> sets the file or tty on which debugging
<envar>PGTTY</envar> sets the file or <acronym>tty</acronym> on which debugging
messages from the backend server are displayed.
</para>
</listitem>
@ -182,7 +182,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/Attic/libpq++.sgml,v 1.30 2001/05/09 17:46:
</sect1>
<sect1 id="libpqpp-classes">
<title>libpq++ Classes</title>
<title><application>libpq++</application> Classes</title>
<sect2>
<title>Connection Class: <classname>PgConnection</classname></title>
@ -199,7 +199,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/Attic/libpq++.sgml,v 1.30 2001/05/09 17:46:
<para>
The database class provides C++ objects that have a connection
to a backend server. To create such an object one first needs
the apropriate environment for the backend to access.
the appropriate environment for the backend to access.
The following constructors deal with making a connection to a backend
server from a C++ program.
</para>
@ -218,7 +218,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/Attic/libpq++.sgml,v 1.30 2001/05/09 17:46:
PgConnection::PgConnection(const char *conninfo)
</synopsis>
Although typically called from one of the access classes, a connection to
a backend server is possible by creating a PgConnection object.
a backend server is possible by creating a <classname>PgConnection</> object.
</para>
</listitem>
<listitem>
@ -250,10 +250,10 @@ $Header: /cvsroot/pgsql/doc/src/sgml/Attic/libpq++.sgml,v 1.30 2001/05/09 17:46:
<synopsis>
PgDatabase(const char *conninfo)
</synopsis>
After a PgDatabase has been created it should be checked to make sure
the connection to the database succeded before sending
After a <classname>PgDatabase</classname> has been created it should be checked to make sure
the connection to the database succeeded before sending
queries to the object. This can easily be done by
retrieving the current status of the PgDatabase object with the
retrieving the current status of the <classname>PgDatabase</classname> object with the
<function>Status</function> or <function>ConnectionBad</function> methods.
</para>
</listitem>
@ -274,7 +274,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/Attic/libpq++.sgml,v 1.30 2001/05/09 17:46:
<synopsis>
PGnotify* PgConnection::Notifies()
</synopsis>
See PQnotifies() for details.
See <function>PQnotifies</function> for details.
</para>
</listitem>
</itemizedlist>
@ -300,28 +300,28 @@ $Header: /cvsroot/pgsql/doc/src/sgml/Attic/libpq++.sgml,v 1.30 2001/05/09 17:46:
<simplelist>
<member>
PGRES_EMPTY_QUERY
<symbol>PGRES_EMPTY_QUERY</symbol>
</member>
<member>
PGRES_COMMAND_OK, if the query was a command
<symbol>PGRES_COMMAND_OK</symbol>, if the query was a command
</member>
<member>
PGRES_TUPLES_OK, if the query successfully returned tuples
<symbol>PGRES_TUPLES_OK</symbol>, if the query successfully returned tuples
</member>
<member>
PGRES_COPY_OUT
<symbol>PGRES_COPY_OUT</symbol>
</member>
<member>
PGRES_COPY_IN
<symbol>PGRES_COPY_IN</symbol>
</member>
<member>
PGRES_BAD_RESPONSE, if an unexpected response was received
<symbol>PGRES_BAD_RESPONSE</symbol>, if an unexpected response was received
</member>
<member>
PGRES_NONFATAL_ERROR
<symbol>PGRES_NONFATAL_ERROR</symbol>
</member>
<member>
PGRES_FATAL_ERROR
<symbol>PGRES_FATAL_ERROR</symbol>
</member>
</simplelist>
</para>
@ -394,7 +394,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/Attic/libpq++.sgml,v 1.30 2001/05/09 17:46:
<listitem>
<para>
<function>FieldNum</function>
PQfnumber Returns the field (attribute) index associated with
<function>PQfnumber</function> Returns the field (attribute) index associated with
the given field name.
<synopsis>
int PgDatabase::FieldNum(const char* field_name) const
@ -463,41 +463,41 @@ $Header: /cvsroot/pgsql/doc/src/sgml/Attic/libpq++.sgml,v 1.30 2001/05/09 17:46:
<listitem>
<para>
<function>GetValue</function>
Returns a single field (attribute) value of one tuple of a PGresult.
Returns a single field (attribute) value of one tuple of a <structname>PGresult</structname>.
Tuple and field indices start at 0.
<synopsis>
const char *PgDatabase::GetValue(int tup_num, int field_num) const
</synopsis>
For most queries, the value returned by GetValue is a null-terminated
ASCII string representation of the attribute value. But if BinaryTuples()
is TRUE, the value returned by GetValue is the binary representation
For most queries, the value returned by <function>GetValue</function> is a null-terminated
string representation of the attribute value. But if <function>BinaryTuples</function>
is TRUE, the value returned by <function>GetValue</function> is the binary representation
of the type in the internal format of the backend server (but not including
the size word, if the field is variable-length). It is then the programmer's
responsibility to cast and convert the data to the correct C type. The
pointer returned by GetValue points to storage that is part of the
PGresult structure. One should not modify it, and one must explicitly
pointer returned by <function>GetValue</function> points to storage that is part of the
<structname>PGresult</structname> structure. One should not modify it, and one must explicitly
copy the value into other storage if it is to be used past the lifetime
of the PGresult structure itself. BinaryTuples() is not yet implemented.
of the <structname>PGresult</structname> structure itself. <function>BinaryTuples</function> is not yet implemented.
</para>
</listitem>
<listitem>
<para>
<function>GetValue</function>
Returns a single field (attribute) value of one tuple of a PGresult.
Returns a single field (attribute) value of one tuple of a <structname>PGresult</structname>.
Tuple and field indices start at 0.
<synopsis>
const char *PgDatabase::GetValue(int tup_num, const char *field_name) const
</synopsis>
For most queries, the value returned by GetValue is a null-terminated
ASCII string representation of the attribute value. But if BinaryTuples()
is TRUE, the value returned by GetValue is the binary representation
For most queries, the value returned by <function>GetValue</function> is a null-terminated
string representation of the attribute value. But if <function>BinaryTuples</function>
is TRUE, the value returned by <function>GetValue</function> is the binary representation
of the type in the internal format of the backend server (but not including
the size word, if the field is variable-length). It is then the programmer's
responsibility to cast and convert the data to the correct C type. The
pointer returned by GetValue points to storage that is part of the
PGresult structure. One should not modify it, and one must explicitly
pointer returned by <function>GetValue</function> points to storage that is part of the
<structname>PGresult</structname> structure. One should not modify it, and one must explicitly
copy the value into other storage if it is to be used past the lifetime
of the PGresult structure itself. BinaryTuples() is not yet implemented.
of the <structname>PGresult</structname> structure itself. <function>BinaryTuples</function> is not yet implemented.
</para>
</listitem>
<listitem>
@ -509,9 +509,9 @@ $Header: /cvsroot/pgsql/doc/src/sgml/Attic/libpq++.sgml,v 1.30 2001/05/09 17:46:
int PgDatabase::GetLength(int tup_num, int field_num) const
</synopsis>
This is the actual data length for the particular data value, that
is the size of the object pointed to by GetValue. Note that for
is the size of the object pointed to by <function>GetValue</function>. Note that for
ASCII-represented values, this size has little to do with the binary
size reported by PQfsize.
size reported by <function>PQfsize</function>.
</para>
</listitem>
<listitem>
@ -523,9 +523,9 @@ $Header: /cvsroot/pgsql/doc/src/sgml/Attic/libpq++.sgml,v 1.30 2001/05/09 17:46:
int PgDatabase::GetLength(int tup_num, const char* field_name) const
</synopsis>
This is the actual data length for the particular data value, that
is the size of the object pointed to by GetValue. Note that for
is the size of the object pointed to by <function>GetValue</function>. Note that for
ASCII-represented values, this size has little to do with the binary
size reported by PQfsize.
size reported by <function>PQfsize</function>.
</para>
</listitem>
<listitem>
@ -535,7 +535,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/Attic/libpq++.sgml,v 1.30 2001/05/09 17:46:
<synopsis>
bool GetIsNull(int tup_num, int field_num) const
</synopsis>
Note that GetValue will return the empty string for null fields, not
Note that <function>GetValue</function> will return the empty string for null fields, not
the NULL pointer.
</para>
</listitem>
@ -546,7 +546,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/Attic/libpq++.sgml,v 1.30 2001/05/09 17:46:
<synopsis>
bool GetIsNull(int tup_num, const char *field_name) const
</synopsis>
Note that GetValue will return the empty string for null fields, not
Note that <function>GetValue</function> will return the empty string for null fields, not
the NULL pointer.
</para>
</listitem>
@ -652,7 +652,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/Attic/libpq++.sgml,v 1.30 2001/05/09 17:46:
<note>
<para>
In the past, the documentation has associated the names used for asyncronous
In the past, the documentation has associated the names used for asynchronous
notification with relations or classes. However, there is in fact no
direct linkage of the two concepts in the implementation, and the
named semaphore in fact does not need to have a corresponding relation

View File

@ -1,25 +1,25 @@
<!--
$Header: /cvsroot/pgsql/doc/src/sgml/libpq.sgml,v 1.70 2001/09/08 16:46:34 petere Exp $
$Header: /cvsroot/pgsql/doc/src/sgml/libpq.sgml,v 1.71 2001/09/10 21:58:46 petere Exp $
-->
<chapter id="libpq">
<title>libpq - C Library</title>
<title><application>libpq</application> - C Library</title>
<indexterm zone="libpq">
<primary>libpq</primary>
</indexterm>
<para>
<filename>libpq</filename> is the <acronym>C</acronym>
<application>libpq</application> is the <acronym>C</acronym>
application programmer's interface to
<productname>Postgres</productname>. <filename>libpq</filename> is a set
<productname>PostgreSQL</productname>. <application>libpq</application> is a set
of library routines that allow client programs to pass queries to the
<productname>Postgres</productname> backend server and to receive the
results of these queries. <filename>libpq</filename> is also the
underlying engine for several other <productname>Postgres</productname>
application interfaces, including <filename>libpq++</filename> (C++),
<productname>PostgreSQL</productname> backend server and to receive the
results of these queries. <application>libpq</application> is also the
underlying engine for several other <productname>PostgreSQL</productname>
application interfaces, including <application>libpq++</application> (C++),
<filename>libpgtcl</filename> (Tcl), <productname>Perl</productname>, and
<filename>ecpg</filename>. So some aspects of libpq's behavior will be
<filename>ecpg</filename>. So some aspects of <application>libpq</>'s behavior will be
important to you if you use one of those packages.
</para>
@ -47,7 +47,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/libpq.sgml,v 1.70 2001/09/08 16:46:34 peter
<para>
The following routines deal with making a connection to a
<productname>Postgres</productname> backend server. The
<productname>PostgreSQL</productname> backend server. The
application program can have several backend connections open at
one time. (One reason to do that is to access more than one
database.) Each connection is represented by a
@ -69,10 +69,10 @@ PGconn *PQconnectdb(const char *conninfo)
</synopsis>
This routine opens a new database connection using the parameters taken
from the string <literal>conninfo</literal>. Unlike PQsetdbLogin() below,
from the string <literal>conninfo</literal>. Unlike <function>PQsetdbLogin</> below,
the parameter set can be extended without changing the function signature,
so use either of this routine or the non-blocking analogues PQconnectStart
/ PQconnectPoll is prefered for application programming. The passed string
so use either of this routine or the non-blocking analogues <function>PQconnectStart</>
and <function>PQconnectPoll</function> is preferred for application programming. The passed string
can be empty to use all default parameters, or it can contain one or more
parameter settings separated by whitespace.
</para>
@ -106,25 +106,25 @@ PGconn *PQconnectdb(const char *conninfo)
<listitem>
<para>
IP address of host to connect to. This should be in standard
numbers-and-dots form, as used by the BSD functions inet_aton et al. If
numbers-and-dots form, as used by the BSD functions <function>inet_aton</> et al. If
a non-zero-length string is specified, TCP/IP communication is used.
</para>
<para>
Using hostaddr instead of host allows the application to avoid a host
Using <literal>hostaddr</> instead of host allows the application to avoid a host
name look-up, which may be important in applications with time
constraints. However, Kerberos authentication requires the host
name. The following therefore applies. If host is specified without
hostaddr, a hostname look-up is forced. If hostaddr is specified without
host, the value for hostaddr gives the remote address; if Kerberos is
used, this causes a reverse name query. If both host and hostaddr are
specified, the value for hostaddr gives the remote address; the value
<literal>hostaddr</>, a host name lookup is forced. If <literal>hostaddr</> is specified without
host, the value for <literal>hostaddr</> gives the remote address; if Kerberos is
used, this causes a reverse name query. If both host and <literal>hostaddr</> are
specified, the value for <literal>hostaddr</> gives the remote address; the value
for host is ignored, unless Kerberos is used, in which case that value
is used for Kerberos authentication. Note that authentication is likely
to fail if libpq is passed a host name that is not the name of the
machine at hostaddr.
to fail if <application>libpq</application> is passed a host name that is not the name of the
machine at <literal>hostaddr</>.
</para>
<para>
Without either a host name or host address, libpq will connect using a
Without either a host name or host address, <application>libpq</application> will connect using a
local Unix domain socket.
</para>
</listitem>
@ -135,7 +135,7 @@ PGconn *PQconnectdb(const char *conninfo)
<listitem>
<para>
Port number to connect to at the server host,
or socket filename extension for Unix-domain connections.
or socket file name extension for Unix-domain connections.
</para>
</listitem>
</varlistentry>
@ -180,7 +180,7 @@ PGconn *PQconnectdb(const char *conninfo)
<term><literal>tty</literal></term>
<listitem>
<para>
A file or tty for optional debug output from the backend.
A file or <acronym>tty</acronym> for optional debug output from the backend.
</para>
</listitem>
</varlistentry>
@ -189,7 +189,7 @@ PGconn *PQconnectdb(const char *conninfo)
<term><literal>requiressl</literal></term>
<listitem>
<para>
Set to '1' to require SSL connection to the backend. Libpq
Set to '1' to require SSL connection to the backend. <application>Libpq</>
will then refuse to connect if the server does not support
SSL. Set to '0' (default) to negotiate with server.
</para>
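For instance, a minimal connection attempt using a conninfo string might look like this (the host and database names are placeholders):
<programlisting>
#include &lt;stdio.h&gt;
#include &lt;libpq-fe.h&gt;

int
main(void)
{
    PGconn *conn = PQconnectdb("host=localhost dbname=template1");

    if (PQstatus(conn) == CONNECTION_BAD)
    {
        fprintf(stderr, "Connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* ... issue queries with PQexec() here ... */

    PQfinish(conn);
    return 0;
}
</programlisting>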
@ -235,8 +235,8 @@ PGconn *PQsetdb(char *pghost,
char *pgtty,
char *dbName)
</synopsis>
This is a macro that calls <function>PQsetdbLogin()</function> with null pointers
for the login and pwd parameters. It is provided primarily
This is a macro that calls <function>PQsetdbLogin</function> with null pointers
for the <parameter>login</> and <parameter>pwd</> parameters. It is provided primarily
for backward compatibility with old programs.
</para>
</listitem>
@ -258,24 +258,24 @@ PostgresPollingStatusType PQconnectPoll(PGconn *conn)
</para>
<para>
The database connection is made using the parameters taken from the string
<literal>conninfo</literal>, passed to PQconnectStart. This string is in
the same format as described above for PQconnectdb.
<literal>conninfo</literal>, passed to <function>PQconnectStart</function>. This string is in
the same format as described above for <function>PQconnectdb</function>.
</para>
<para>
Neither PQconnectStart nor PQconnectPoll will block, as long as a number of
Neither <function>PQconnectStart</function> nor <function>PQconnectPoll</function> will block, as long as a number of
restrictions are met:
<itemizedlist>
<listitem>
<para>
The hostaddr and host parameters are used appropriately to ensure that
The <literal>hostaddr</> and <literal>host</> parameters are used appropriately to ensure that
name and reverse name queries are not made. See the documentation of
these parameters under PQconnectdb above for details.
these parameters under <function>PQconnectdb</function> above for details.
</para>
</listitem>
<listitem>
<para>
If you call PQtrace, ensure that the stream object into which you trace
If you call <function>PQtrace</function>, ensure that the stream object into which you trace
will not block.
</para>
</listitem>
@ -283,7 +283,7 @@ PostgresPollingStatusType PQconnectPoll(PGconn *conn)
<listitem>
<para>
You ensure for yourself that the socket is in the appropriate state
before calling PQconnectPoll, as described below.
before calling <function>PQconnectPoll</function>, as described below.
</para>
</listitem>
</itemizedlist>
@ -291,68 +291,91 @@ PostgresPollingStatusType PQconnectPoll(PGconn *conn)
<para>
To begin, call <literal>conn=PQconnectStart("&lt;connection_info_string&gt;")</literal>.
If conn is NULL, then libpq has been unable to allocate a new PGconn
structure. Otherwise, a valid PGconn pointer is returned (though not yet
If <varname>conn</varname> is NULL, then <application>libpq</> has been unable to allocate a new <structname>PGconn</>
structure. Otherwise, a valid <structname>PGconn</> pointer is returned (though not yet
representing a valid connection to the database). On return from
PQconnectStart, call status=PQstatus(conn). If status equals
CONNECTION_BAD, PQconnectStart has failed.
<function>PQconnectStart</function>, call <literal>status=PQstatus(conn)</literal>. If status equals
<symbol>CONNECTION_BAD</symbol>, <function>PQconnectStart</function> has failed.
</para>
<para>
If PQconnectStart succeeds, the next stage is to poll libpq so that it may
If <function>PQconnectStart</> succeeds, the next stage is to poll <application>libpq</> so that it may
proceed with the connection sequence. Loop thus: Consider a connection
'inactive' by default. If PQconnectPoll last returned PGRES_POLLING_ACTIVE,
consider it 'active' instead. If PQconnectPoll(conn) last returned
PGRES_POLLING_READING, perform a select for reading on PQsocket(conn). If
it last returned PGRES_POLLING_WRITING, perform a select for writing on
PQsocket(conn). If you have yet to call PQconnectPoll, i.e. after the call
to PQconnectStart, behave as if it last returned PGRES_POLLING_WRITING. If
the select shows that the socket is ready, consider it 'active'. If it has
been decided that this connection is 'active', call PQconnectPoll(conn)
again. If this call returns PGRES_POLLING_FAILED, the connection procedure
has failed. If this call returns PGRES_POLLING_OK, the connection has been
<quote>inactive</quote> by default. If <function>PQconnectPoll</function> last returned <symbol>PGRES_POLLING_ACTIVE</>,
consider it <quote>active</quote> instead. If <function>PQconnectPoll(conn)</function> last returned
<symbol>PGRES_POLLING_READING</symbol>, perform a select for reading on <function>PQsocket(conn)</function>. If
it last returned <symbol>PGRES_POLLING_WRITING</symbol>, perform a select for writing on
<function>PQsocket(conn)</function>. If you have yet to call <function>PQconnectPoll</function>, i.e. after the call
to <function>PQconnectStart</function>, behave as if it last returned <symbol>PGRES_POLLING_WRITING</symbol>. If
the select shows that the socket is ready, consider it <quote>active</quote>. If it has
been decided that this connection is <quote>active</quote>, call <function>PQconnectPoll(conn)</function>
again. If this call returns <symbol>PGRES_POLLING_FAILED</symbol>, the connection procedure
has failed. If this call returns <symbol>PGRES_POLLING_OK</symbol>, the connection has been
successfully made.
</para>
<para>
Note that the use of select() to ensure that the socket is ready is merely
Note that the use of <function>select()</function> to ensure that the socket is ready is merely
a (likely) example; those with other facilities available, such as a
poll() call, may of course use that instead.
<function>poll()</function> call, may of course use that instead.
</para>
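Putting the steps above together, a select()-based loop might be structured roughly as follows; this is only a sketch, with timeouts and error reporting omitted and the conninfo string supplied by the caller:
<programlisting>
#include &lt;sys/types.h&gt;
#include &lt;sys/time.h&gt;
#include &lt;unistd.h&gt;
#include &lt;libpq-fe.h&gt;

/* Sketch of the polling sequence described above. */
static PGconn *
connect_nonblocking(const char *conninfo)
{
    PGconn *conn = PQconnectStart(conninfo);
    PostgresPollingStatusType status = PGRES_POLLING_WRITING;

    if (conn == NULL)
        return NULL;
    if (PQstatus(conn) == CONNECTION_BAD)
    {
        PQfinish(conn);
        return NULL;
    }

    while (status != PGRES_POLLING_OK &amp;&amp; status != PGRES_POLLING_FAILED)
    {
        int    sock = PQsocket(conn);
        fd_set fds;

        FD_ZERO(&amp;fds);
        FD_SET(sock, &amp;fds);

        if (status == PGRES_POLLING_READING)
            select(sock + 1, &amp;fds, NULL, NULL, NULL);
        else if (status == PGRES_POLLING_WRITING)
            select(sock + 1, NULL, &amp;fds, NULL, NULL);
        /* on PGRES_POLLING_ACTIVE, just call PQconnectPoll again */

        status = PQconnectPoll(conn);
    }

    if (status == PGRES_POLLING_OK)
        return conn;

    PQfinish(conn);
    return NULL;
}
</programlisting>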
<para>
At any time during connection, the status of the connection may be
checked, by calling PQstatus. If this is CONNECTION_BAD, then the
connection procedure has failed; if this is CONNECTION_OK, then the
checked, by calling <function>PQstatus</>. If this is <symbol>CONNECTION_BAD</>, then the
connection procedure has failed; if this is <function>CONNECTION_OK</>, then the
connection is ready. Either of these states should be equally detectable
from the return value of PQconnectPoll, as above. Other states may be
from the return value of <function>PQconnectPoll</>, as above. Other states may be
shown during (and only during) an asynchronous connection procedure. These
indicate the current stage of the connection procedure, and may be useful
to provide feedback to the user, for example. These statuses may include:
<itemizedlist>
<listitem>
<para>
CONNECTION_STARTED: Waiting for connection to be made.
</para>
</listitem>
<listitem>
<para>
CONNECTION_MADE: Connection OK; waiting to send.
</para>
</listitem>
<listitem>
<para>
CONNECTION_AWAITING_RESPONSE: Waiting for a response from the postmaster.
</para>
</listitem>
<listitem>
<para>
CONNECTION_AUTH_OK: Received authentication; waiting for backend start-up.
</para>
</listitem>
<listitem>
<para>
CONNECTION_SETENV: Negotiating environment.
</para>
</listitem>
</itemizedlist>
<variablelist>
<varlistentry>
<term><symbol>CONNECTION_STARTED</symbol></term>
<listitem>
<para>
Waiting for connection to be made.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><symbol>CONNECTION_MADE</symbol></term>
<listitem>
<para>
Connection OK; waiting to send.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><symbol>CONNECTION_AWAITING_RESPONSE</symbol></term>
<listitem>
<para>
Waiting for a response from the postmaster.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><symbol>CONNECTION_AUTH_OK</symbol></term>
<listitem>
<para>
Received authentication; waiting for backend start-up.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><symbol>CONNECTION_SETENV</symbol></term>
<listitem>
<para>
Negotiating environment.
</para>
</listitem>
</varlistentry>
</variablelist>
Note that, although these constants will remain (in order to maintain
compatibility), an application should never rely upon these appearing in a
@ -376,16 +399,20 @@ PostgresPollingStatusType PQconnectPoll(PGconn *conn)
}
</programlisting>
</para>
<para>
Note that if PQconnectStart returns a non-NULL pointer, you must call
PQfinish when you are finished with it, in order to dispose of
Note that if <function>PQconnectStart</function> returns a non-NULL pointer, you must call
<function>PQfinish</function> when you are finished with it, in order to dispose of
the structure and any associated memory blocks. This must be done even if a
call to PQconnectStart or PQconnectPoll failed.
call to <function>PQconnectStart</function> or <function>PQconnectPoll</function> failed.
</para>
<para>
PQconnectPoll will currently block if libpq is compiled with USE_SSL
<function>PQconnectPoll</function> will currently block if
<application>libpq</> is compiled with <symbol>USE_SSL</symbol>
defined. This restriction may be removed in the future.
</para>
<para>
These functions leave the socket in a non-blocking state as if
<function>PQsetnonblocking</function> had been called.
@ -414,22 +441,24 @@ struct PQconninfoOption
}
</synopsis>
Returns a connection options array. This may
be used to determine all possible PQconnectdb options and their
be used to determine all possible <function>PQconnectdb</function> options and their
current default values. The return value points to an array of
PQconninfoOption structs, which ends with an entry having a NULL
keyword pointer. Note that the default values ("val" fields)
<structname>PQconninfoOption</structname> structs, which ends with an entry having a NULL
keyword pointer. Note that the default values (<structfield>val</structfield> fields)
will depend on environment variables and other context.
Callers must treat the connection options data as read-only.
</para>
<para>
After processing the options array, free it by passing it to
PQconninfoFree(). If this is not done, a small amount of memory
is leaked for each call to PQconndefaults().
<function>PQconninfoFree</function>. If this is not done, a small amount of memory
is leaked for each call to <function>PQconndefaults</function>.
</para>
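<para>
As an illustration, a short fragment that prints each available option keyword
with its default value and then releases the array might look like this:
<programlisting>
#include &lt;stdio.h&gt;
#include "libpq-fe.h"

static void
show_default_options(void)
{
    PQconninfoOption *options = PQconndefaults();
    PQconninfoOption *opt;

    if (options == NULL)
        return;                 /* out of memory */

    /* The array ends with an entry whose keyword pointer is NULL. */
    for (opt = options; opt->keyword != NULL; opt++)
        printf("%s = %s\n", opt->keyword,
               opt->val ? opt->val : "(no default)");

    /* Free the array to avoid leaking a little memory per call. */
    PQconninfoFree(options);
}
</programlisting>
</para>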
<para>
In Postgres versions before 7.0, PQconndefaults() returned a pointer
In <productname>PostgreSQL</productname> versions before 7.0, <function>PQconndefaults</function> returned a pointer
to a static array, rather than a dynamically allocated array. That
wasn't thread-safe, so the behavior has been changed.
was not thread-safe, so the behavior has been changed.
</para>
</listitem>
@ -437,14 +466,14 @@ struct PQconninfoOption
<para>
<function>PQfinish</function>
Close the connection to the backend. Also frees
memory used by the PGconn object.
memory used by the <structname>PGconn</structname> object.
<synopsis>
void PQfinish(PGconn *conn)
</synopsis>
Note that even if the backend connection attempt fails (as
indicated by PQstatus), the application should call PQfinish
to free the memory used by the PGconn object.
The PGconn pointer should not be used after PQfinish has been called.
indicated by <function>PQstatus</function>), the application should call <function>PQfinish</function>
to free the memory used by the <structname>PGconn</structname> object.
The <structname>PGconn</> pointer should not be used after <function>PQfinish</function> has been called.
</para>
</listitem>
@ -477,14 +506,14 @@ PostgresPollingStatusType PQresetPoll(PGconn *conn);
These functions will close the connection to the backend and attempt to
establish a new connection to the same postmaster, using all the same
parameters previously used. This may be useful for error recovery if a
working connection is lost. They differ from PQreset (above) in that they
working connection is lost. They differ from <function>PQreset</function> (above) in that they
act in a non-blocking manner. These functions suffer from the same
restrictions as PQconnectStart and PQconnectPoll.
restrictions as <function>PQconnectStart</> and <function>PQconnectPoll</>.
</para>
<para>
Call PQresetStart. If it returns 0, the reset has failed. If it returns 1,
poll the reset using PQresetPoll in exactly the same way as you would
create the connection using PQconnectPoll.
Call <function>PQresetStart</function>. If it returns 0, the reset has failed. If it returns 1,
poll the reset using <function>PQresetPoll</function> in exactly the same way as you would
create the connection using <function>PQconnectPoll</function>.
</para>
</listitem>
@ -492,13 +521,13 @@ PostgresPollingStatusType PQresetPoll(PGconn *conn);
</para>
<para>
libpq application programmers should be careful to
maintain the PGconn abstraction. Use the accessor functions below to get
at the contents of PGconn. Avoid directly referencing the fields of the
PGconn structure because they are subject to change in the future.
(Beginning in <productname>Postgres</productname> release 6.4, the
definition of struct PGconn is not even provided in <filename>libpq-fe.h</filename>.
If you have old code that accesses PGconn fields directly, you can keep using it
<application>libpq</application> application programmers should be careful to
maintain the <structname>PGconn</structname> abstraction. Use the accessor functions below to get
at the contents of <structname>PGconn</structname>. Avoid directly referencing the fields of the
<structname>PGconn</> structure because they are subject to change in the future.
(Beginning in <productname>PostgreSQL</productname> release 6.4, the
definition of struct <structname>PGconn</structname> is not even provided in <filename>libpq-fe.h</filename>.
If you have old code that accesses <structname>PGconn</structname> fields directly, you can keep using it
by including <filename>libpq-int.h</filename> too, but you are encouraged to fix the code
soon.)
<itemizedlist>
@ -509,8 +538,8 @@ soon.)
<synopsis>
char *PQdb(const PGconn *conn)
</synopsis>
PQdb and the next several functions return the values established
at connection. These values are fixed for the life of the PGconn
<function>PQdb</> and the next several functions return the values established
at connection. These values are fixed for the life of the <structname>PGconn</>
object.
</para>
</listitem>
@ -558,7 +587,7 @@ char *PQport(const PGconn *conn)
<listitem>
<para>
<function>PQtty</function>
Returns the debug tty of the connection.
Returns the debug <acronym>tty</acronym> of the connection.
<synopsis>
char *PQtty(const PGconn *conn)
</synopsis>
@ -603,7 +632,7 @@ ConnStatusType PQstatus(const PGconn *conn)
</para>
<para>
See the entry for PQconnectStart and PQconnectPoll with regards
See the entry for <function>PQconnectStart</> and <function>PQconnectPoll</> with regard
to other status codes
that might be seen.
</para>
@ -620,9 +649,9 @@ char *PQerrorMessage(const PGconn* conn);
</para>
<para>
Nearly all libpq functions will set
Nearly all <application>libpq</> functions will set
<function>PQerrorMessage</function> if they fail.
Note that by libpq convention, a non-empty
Note that by <application>libpq</application> convention, a non-empty
<function>PQerrorMessage</function> will
include a trailing newline.
</para>
@ -655,13 +684,13 @@ int PQbackendPID(const PGconn *conn);
SSL *PQgetssl(const PGconn *conn);
</synopsis>
This structure can be used to verify encryption levels, check
server certificate and more. Refer to the OpenSSL documentation
server certificate and more. Refer to the SSL documentation
for information about this structure.
</para>
<para>
You must define <literal>USE_SSL</literal> in order to get the
prototype for this function. Doing this will also
automatically include <filename>ssl.h</filename> from OpenSSL.
automatically include <filename>ssl.h</filename> from <productname>OpenSSL</productname>.
</para>
</listitem>
@ -684,19 +713,19 @@ SQL queries and commands.
<listitem>
<para>
<function>PQexec</function>
Submit a query to <productname>Postgres</productname>
Submit a query to the server
and wait for the result.
<synopsis>
PGresult *PQexec(PGconn *conn,
const char *query);
</synopsis>
Returns a PGresult pointer or possibly a NULL pointer.
Returns a <structname>PGresult</structname> pointer or possibly a NULL pointer.
A non-NULL pointer will generally be returned except in
out-of-memory conditions or serious errors such as inability
to send the query to the backend.
If a NULL is returned, it
should be treated like a PGRES_FATAL_ERROR result. Use
PQerrorMessage to get more information about the error.
should be treated like a <symbol>PGRES_FATAL_ERROR</symbol> result. Use
<function>PQerrorMessage</function> to get more information about the error.
</para>
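<para>
For instance, a minimal usage sketch (the query text is arbitrary, and
<parameter>conn</parameter> is assumed to be an established connection)
might look like this:
<programlisting>
#include &lt;stdio.h&gt;
#include "libpq-fe.h"

/* Run one illustrative query and report any error. */
static void
run_query(PGconn *conn)
{
    PGresult *res = PQexec(conn, "SELECT datname FROM pg_database");

    if (res == NULL)
    {
        /* Treat a NULL return like a PGRES_FATAL_ERROR result. */
        fprintf(stderr, "PQexec failed: %s", PQerrorMessage(conn));
        return;
    }

    if (PQresultStatus(res) != PGRES_TUPLES_OK)
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
    else
    {
        /* ... use the result here ... */
    }

    PQclear(res);
}
</programlisting>
</para>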
</listitem>
</itemizedlist>
@ -705,13 +734,13 @@ PGresult *PQexec(PGconn *conn,
The <structname>PGresult</structname> structure encapsulates the query result
returned by the backend.
<filename>libpq</filename> application programmers should be careful to
maintain the PGresult abstraction. Use the accessor functions below to get
at the contents of PGresult. Avoid directly referencing the fields of the
PGresult structure because they are subject to change in the future.
(Beginning in <productname>Postgres</productname> release 6.4, the
definition of struct PGresult is not even provided in libpq-fe.h. If you
have old code that accesses PGresult fields directly, you can keep using it
by including libpq-int.h too, but you are encouraged to fix the code
maintain the <structname>PGresult</structname> abstraction. Use the accessor functions below to get
at the contents of <structname>PGresult</structname>. Avoid directly referencing the fields of the
<structname>PGresult</structname> structure because they are subject to change in the future.
(Beginning in <productname>PostgreSQL</productname> 6.4, the
definition of struct <structname>PGresult</structname> is not even provided in <filename>libpq-fe.h</>. If you
have old code that accesses <structname>PGresult</structname> fields directly, you can keep using it
by including <filename>libpq-int.h</filename> too, but you are encouraged to fix the code
soon.)
</para>
@ -723,7 +752,8 @@ soon.)
<synopsis>
ExecStatusType PQresultStatus(const PGresult *res)
</synopsis>
PQresultStatus can return one of the following values:
<function>PQresultStatus</function> can return one of the following values:
<itemizedlist>
<listitem>
<para><literal>PGRES_EMPTY_QUERY</literal> -- The string sent to the backend was empty.</para>
@ -783,10 +813,10 @@ char *PQresultErrorMessage(const PGresult *res);
Immediately following a <function>PQexec</function> or <function>PQgetResult</function>
call, <function>PQerrorMessage</function> (on the connection) will return the same
string as <function>PQresultErrorMessage</function> (on the result). However, a
PGresult will retain its error message
<structname>PGresult</structname> will retain its error message
until destroyed, whereas the connection's error message will change when
subsequent operations are done. Use <function>PQresultErrorMessage</function> when you want to
know the status associated with a particular PGresult; use <function>PQerrorMessage</function>
know the status associated with a particular <structname>PGresult</structname>; use <function>PQerrorMessage</function>
when you want to know the status from the latest operation on the connection.
</para>
</listitem>
@ -794,13 +824,13 @@ when you want to know the status from the latest operation on the connection.
<listitem>
<para>
<function>PQclear</function>
Frees the storage associated with the PGresult.
Every query result should be freed via PQclear when
Frees the storage associated with the <structname>PGresult</structname>.
Every query result should be freed via <function>PQclear</function> when
it is no longer needed.
<synopsis>
void PQclear(PGresult *res);
</synopsis>
You can keep a PGresult object around for as long as you
You can keep a <structname>PGresult</structname> object around for as long as you
need it; it does not go away when you issue a new query,
nor even if you close the connection. To get rid of it,
you must call <function>PQclear</function>. Failure to do this will
@ -811,17 +841,17 @@ void PQclear(PQresult *res);
<listitem>
<para>
<function>PQmakeEmptyPGresult</function>
Constructs an empty PGresult object with the given status.
Constructs an empty <structname>PGresult</structname> object with the given status.
<synopsis>
PGresult* PQmakeEmptyPGresult(PGconn *conn, ExecStatusType status);
</synopsis>
This is libpq's internal routine to allocate and initialize an empty
PGresult object. It is exported because some applications find it
This is <application>libpq</>'s internal routine to allocate and initialize an empty
<structname>PGresult</structname> object. It is exported because some applications find it
useful to generate result objects (particularly objects with error
status) themselves. If conn is not NULL and status indicates an error,
the connection's current errorMessage is copied into the PGresult.
Note that PQclear should eventually be called on the object, just
as with a PGresult returned by libpq itself.
status) themselves. If <parameter>conn</parameter> is not NULL and status indicates an error,
the connection's current errorMessage is copied into the <structname>PGresult</structname>.
Note that <function>PQclear</function> should eventually be called on the object, just
as with a <structname>PGresult</structname> returned by <application>libpq</application> itself.
</para>
</listitem>
</itemizedlist>
@ -931,8 +961,8 @@ Oid PQftype(const PGresult *res,
int field_index);
</synopsis>
You can query the system table <literal>pg_type</literal> to obtain
the name and properties of the various datatypes. The <acronym>OID</acronym>s
of the built-in datatypes are defined in <filename>src/include/catalog/pg_type.h</filename>
the name and properties of the various data types. The <acronym>OID</acronym>s
of the built-in data types are defined in <filename>src/include/catalog/pg_type.h</filename>
in the source tree.
</para>
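<para>
For illustration (the field index 0 and the query text are only examples),
such a lookup could be written as:
<programlisting>
#include &lt;stdio.h&gt;
#include "libpq-fe.h"

/* Print the pg_type name of the first field of res, which is assumed to
 * be a valid query result obtained over the open connection conn. */
static void
print_type_of_first_field(PGconn *conn, const PGresult *res)
{
    char      query[128];
    PGresult *typres;

    sprintf(query, "SELECT typname FROM pg_type WHERE oid = %u",
            (unsigned int) PQftype(res, 0));

    typres = PQexec(conn, query);
    if (typres == NULL)
        return;
    if (PQresultStatus(typres) == PGRES_TUPLES_OK &amp;&amp; PQntuples(typres) == 1)
        printf("first field has type %s\n", PQgetvalue(typres, 0, 0));
    PQclear(typres);
}
</programlisting>
</para>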
</listitem>
@ -960,7 +990,7 @@ int PQfmod(const PGresult *res,
int PQfsize(const PGresult *res,
int field_index);
</synopsis>
PQfsize returns the space allocated for this field in a database
<function>PQfsize</> returns the space allocated for this field in a database
tuple, in other words the size of the server's binary representation
of the data type. -1 is returned if the field is variable size.
</para>
@ -990,7 +1020,7 @@ extracts data from a <acronym>BINARY</acronym> cursor.
<para>
<function>PQgetvalue</function>
Returns a single field (attribute) value of one tuple
of a PGresult.
of a <structname>PGresult</structname>.
Tuple and field indices start at 0.
<synopsis>
char* PQgetvalue(const PGresult *res,
@ -1007,10 +1037,10 @@ type in the internal format of the backend server
It is then the programmer's responsibility to cast and
convert the data to the correct C type. The pointer
returned by <function>PQgetvalue</function> points to storage that is
part of the PGresult structure. One should not modify it,
part of the <structname>PGresult</structname> structure. One should not modify it,
and one must explicitly
copy the value into other storage if it is to
be used past the lifetime of the PGresult structure itself.
be used past the lifetime of the <structname>PGresult</structname> structure itself.
</para>
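<para>
A small sketch that prints every value of a result (assumed to be a
successful <symbol>PGRES_TUPLES_OK</symbol> result) could look like this:
<programlisting>
#include &lt;stdio.h&gt;
#include "libpq-fe.h"

/* Print all fields of all tuples in res. */
static void
print_result(const PGresult *res)
{
    int ntuples = PQntuples(res);
    int nfields = PQnfields(res);
    int i, j;

    for (i = 0; i &lt; ntuples; i++)
    {
        for (j = 0; j &lt; nfields; j++)
            printf("%s = %s\n", PQfname(res, j), PQgetvalue(res, i, j));
        printf("\n");
    }
}
</programlisting>
</para>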
</listitem>
@ -1025,7 +1055,7 @@ int PQgetisnull(const PGresult *res,
int field_num);
</synopsis>
This function returns 1 if the field contains a NULL, 0 if
it contains a non-null value. (Note that PQgetvalue
it contains a non-null value. (Note that <function>PQgetvalue</function>
will return an empty string, not a null pointer, for a NULL
field.)
</para>
@ -1042,8 +1072,8 @@ int PQgetlength(const PGresult *res,
int field_num);
</synopsis>
This is the actual data length for the particular data value, that is, the
size of the object pointed to by PQgetvalue. Note that for ASCII-represented
values, this size has little to do with the binary size reported by PQfsize.
size of the object pointed to by <function>PQgetvalue</function>. Note that for character-represented
values, this size has little to do with the binary size reported by <function>PQfsize</function>.
</para>
</listitem>
@ -1086,7 +1116,7 @@ function is no longer actively supported.
<para>
<function>PQcmdStatus</function>
Returns the command status string from the SQL command that
generated the PGresult.
generated the <structname>PGresult</structname>.
<synopsis>
char * PQcmdStatus(const PGresult *res);
</synopsis>
@ -1101,7 +1131,7 @@ char * PQcmdStatus(const PGresult *res);
char * PQcmdTuples(const PGresult *res);
</synopsis>
If the <acronym>SQL</acronym> command that generated the
PGresult was INSERT, UPDATE or DELETE, this returns a
<structname>PGresult</structname> was INSERT, UPDATE or DELETE, this returns a
string containing the number of rows affected. If the
command was anything else, it returns the empty string.
</para>
@ -1168,8 +1198,8 @@ done from a signal handler, but not otherwise.)
</listitem>
<listitem>
<para>
<function>PQexec</function> can return only one PGresult structure. If the submitted query
string contains multiple <acronym>SQL</acronym> commands, all but the last PGresult are
<function>PQexec</function> can return only one <structname>PGresult</structname> structure. If the submitted query
string contains multiple <acronym>SQL</acronym> commands, all but the last <structname>PGresult</structname> are
discarded by <function>PQexec</function>.
</para>
</listitem>
@ -1202,8 +1232,8 @@ connection to the backend.
<synopsis>
int PQsetnonblocking(PGconn *conn, int arg)
</synopsis>
Sets the state of the connection to nonblocking if arg is TRUE,
blocking if arg is FALSE. Returns 0 if OK, -1 if error.
Sets the state of the connection to nonblocking if <parameter>arg</parameter> is 1,
blocking if <parameter>arg</parameter> is 0. Returns 0 if OK, -1 if error.
</para>
<para>
In the nonblocking state, calls to
@ -1219,7 +1249,7 @@ int PQsetnonblocking(PGconn *conn, int arg)
completes.
</para>
<para>
More of libpq is expected to be made safe for
More of <application>libpq</application> is expected to be made safe for
<function>PQsetnonblocking</function> functionality in the near future.
</para>
</listitem>
@ -1231,17 +1261,17 @@ int PQsetnonblocking(PGconn *conn, int arg)
<synopsis>
int PQisnonblocking(const PGconn *conn)
</synopsis>
Returns TRUE if the connection is set to non-blocking mode,
FALSE if blocking.
Returns 1 if the connection is set to non-blocking mode,
0 if blocking.
</para>
</listitem>
<listitem>
<para>
<function>PQsendQuery</function>
Submit a query to <productname>Postgres</productname> without
waiting for the result(s). TRUE is returned if the query was
successfully dispatched, FALSE if not (in which case, use
Submit a query to <productname>PostgreSQL</productname> without
waiting for the result(s). 1 is returned if the query was
successfully dispatched, 0 if not (in which case, use
<function>PQerrorMessage</function> to get more information about the failure).
<synopsis>
int PQsendQuery(PGconn *conn,
@ -1400,7 +1430,7 @@ can also attempt to cancel a query that is still being processed by the backend.
<listitem>
<para>
<function>PQrequestCancel</function>
Request that <productname>Postgres</productname> abandon
Request that <productname>PostgreSQL</productname> abandon
processing of the current query.
<synopsis>
int PQrequestCancel(PGconn *conn);
@ -1430,7 +1460,7 @@ will abort the whole transaction.
So, it is also possible to use it in conjunction with plain
<function>PQexec</function>, if the decision to cancel can be made in a signal
handler. For example, <application>psql</application> invokes
<function>PQrequestCancel</function> from a SIGINT signal handler, thus allowing
<function>PQrequestCancel</function> from a <systemitem>SIGINT</> signal handler, thus allowing
interactive cancellation of queries that it issues through <function>PQexec</function>.
Note that <function>PQrequestCancel</function> will have no effect if the connection
is not currently open or the backend is not currently processing a query.
@ -1442,7 +1472,7 @@ is not currently open or the backend is not currently processing a query.
<title>Fast Path</title>
<para>
<productname>Postgres</productname> provides a fast path interface to send
<productname>PostgreSQL</productname> provides a fast path interface to send
function calls to the backend. This is a trapdoor into system internals and
can be a potential security hole. Most users will not need this feature.
@ -1460,19 +1490,19 @@ PGresult* PQfn(PGconn* conn,
const PQArgBlock *args,
int nargs);
</synopsis>
The fnid argument is the object identifier of the function to be
The <parameter>fnid</> argument is the object identifier of the function to be
executed.
result_buf is the buffer in which
<parameter>result_buf</parameter> is the buffer in which
to place the return value. The caller must have allocated
sufficient space to store the return value (there is no check!).
The actual result length will be returned in the integer pointed
to by result_len. If a 4-byte integer result is expected, set
result_is_int to 1; otherwise set it to 0. (Setting result_is_int to 1
tells libpq to byte-swap the value if necessary, so that it is
to by <parameter>result_len</parameter>. If a 4-byte integer result is expected, set
<parameter>result_is_int</parameter> to 1; otherwise set it to 0. (Setting <parameter>result_is_int</parameter> to 1
tells <application>libpq</> to byte-swap the value if necessary, so that it is
delivered as a proper int value for the client machine. When
result_is_int is 0, the byte string sent by the backend is returned
unmodified.)
args and nargs specify the arguments to be passed to the function.
<parameter>args</> and <parameter>nargs</> specify the arguments to be passed to the function.
<synopsis>
typedef struct {
int len;
@ -1483,9 +1513,9 @@ typedef struct {
} u;
} PQArgBlock;
</synopsis>
<function>PQfn</function> always returns a valid PGresult*. The resultStatus
<function>PQfn</function> always returns a valid <structname>PGresult</structname> pointer. The result status
should be checked before the result is used. The
caller is responsible for freeing the PGresult with
caller is responsible for freeing the <structname>PGresult</structname> with
<function>PQclear</function> when it is no longer needed.
</para>
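<para>
For illustration, a sketch of calling a backend function that takes a single
<type>int4</type> argument and returns an <type>int4</type> might look like
the fragment below; the function OID is whatever the caller has looked up in
<literal>pg_proc</literal>:
<programlisting>
#include &lt;stdio.h&gt;
#include "libpq-fe.h"

/* Call the backend function with OID fnid, passing one int4 argument
 * by value and expecting an int4 result. */
static int
call_int4_function(PGconn *conn, Oid fnid, int argument)
{
    PQArgBlock  arg;
    PGresult   *res;
    int         result = 0;
    int         result_len = 0;

    arg.isint = 1;              /* pass the argument by value */
    arg.len = 4;                /* int4 */
    arg.u.integer = argument;

    res = PQfn(conn, (int) fnid, &amp;result, &amp;result_len,
               1 /* result_is_int */, &amp;arg, 1);

    if (res == NULL)
    {
        fprintf(stderr, "PQfn failed: %s", PQerrorMessage(conn));
        return 0;
    }
    /* A successful call normally reports PGRES_COMMAND_OK. */
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "PQfn failed: %s", PQerrorMessage(conn));

    PQclear(res);
    return result;
}
</programlisting>
</para>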
</listitem>
@ -1498,11 +1528,11 @@ typedef struct {
<title>Asynchronous Notification</title>
<para>
<productname>Postgres</productname> supports asynchronous notification via the
LISTEN and NOTIFY commands. A backend registers its interest in a particular
notification condition with the LISTEN command (and can stop listening
with the UNLISTEN command). All backends listening on a
particular condition will be notified asynchronously when a NOTIFY of that
<productname>PostgreSQL</productname> supports asynchronous notification via the
<command>LISTEN</command> and <command>NOTIFY</command> commands. A backend registers its interest in a particular
notification condition with the <command>LISTEN</command> command (and can stop listening
with the <command>UNLISTEN</command> command). All backends listening on a
particular condition will be notified asynchronously when a <command>NOTIFY</command> of that
condition name is executed by any backend. No additional information is
passed from the notifier to the listener. Thus, typically, any actual data
that needs to be communicated is transferred through a database relation.
@ -1511,9 +1541,9 @@ not necessary for there to be any associated relation.
</para>
<para>
<filename>libpq</filename> applications submit LISTEN and UNLISTEN
commands as ordinary SQL queries. Subsequently, arrival of NOTIFY
messages can be detected by calling PQnotifies().
<filename>libpq</filename> applications submit <command>LISTEN</command> and <command>UNLISTEN</command>
commands as ordinary SQL queries. Subsequently, arrival of <command>NOTIFY</command>
messages can be detected by calling <function>PQnotifies</function>.
<itemizedlist>
<listitem>
@ -1533,14 +1563,14 @@ typedef struct pgNotify {
int be_pid; /* process id of backend */
} PGnotify;
</synopsis>
After processing a PGnotify object returned by <function>PQnotifies</function>,
After processing a <structname>PGnotify</structname> object returned by <function>PQnotifies</function>,
be sure to free it with <function>free()</function> to avoid a memory leak.
</para>
<note>
<para>
In <productname>Postgres</productname> 6.4 and later,
the <literal>be_pid</literal> is the notifying backend's,
whereas in earlier versions it was always your own backend's <acronym>PID</acronym>.
In <productname>PostgreSQL</productname> 6.4 and later,
the <literal>be_pid</literal> is that of the notifying backend,
whereas in earlier versions it was always the <acronym>PID</acronym> of your own backend.
</para>
</note>
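<para>
As a bare-bones illustration (the condition name <literal>mycondition</literal>
is just an example, and a real application would wait on
<function>PQsocket</function> rather than sleep), watching for notifications
could look like this:
<programlisting>
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;unistd.h&gt;
#include "libpq-fe.h"

static void
watch_for_notifies(PGconn *conn)
{
    PGresult *res = PQexec(conn, "LISTEN mycondition");
    PGnotify *notify;

    if (res == NULL || PQresultStatus(res) != PGRES_COMMAND_OK)
    {
        fprintf(stderr, "LISTEN failed: %s", PQerrorMessage(conn));
        if (res != NULL)
            PQclear(res);
        return;
    }
    PQclear(res);

    for (;;)
    {
        /* Read any data the backend has sent, then check for notifies. */
        if (!PQconsumeInput(conn))
        {
            fprintf(stderr, "PQconsumeInput failed: %s", PQerrorMessage(conn));
            return;
        }

        while ((notify = PQnotifies(conn)) != NULL)
        {
            printf("NOTIFY of '%s' received from backend pid %d\n",
                   notify->relname, notify->be_pid);
            free(notify);       /* avoid a memory leak */
        }

        sleep(1);               /* crude; wait on PQsocket(conn) in real code */
    }
}
</programlisting>
</para>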
</listitem>
@ -1589,7 +1619,7 @@ if any notifications came in during the processing of the query.
</indexterm>
<para>
The COPY command in <productname>Postgres</productname> has options to read from
The COPY command in <productname>PostgreSQL</productname> has options to read from
or write to the network connection used by <filename>libpq</filename>.
Therefore, functions are necessary to access this network
connection directly so applications may take advantage of this capability.
@ -1660,7 +1690,7 @@ application should call <function>PQconsumeInput</function> and
end-of-data signal is detected. Unlike <function>PQgetline</function>, this routine takes
responsibility for detecting end-of-data.
On each call, <function>PQgetlineAsync</function> will return data if a complete newline-
terminated data line is available in libpq's input buffer, or if the
terminated data line is available in <application>libpq</>'s input buffer, or if the
incoming data line is too long to fit in the buffer offered by the caller.
Otherwise, no data is returned until the rest of the line arrives.
</para>
@ -1675,7 +1705,7 @@ the caller is too small to hold a line sent by the backend, then a partial
data line will be returned. This can be detected by testing whether the
last returned byte is "<literal>\n</literal>" or not.
The returned string is not null-terminated. (If you want to add a
terminating null, be sure to pass a bufsize one smaller than the room
terminating null, be sure to pass a <parameter>bufsize</parameter> one smaller than the room
actually available.)
</para>
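<para>
A rough sketch of draining <command>COPY</command> data with
<function>PQgetlineAsync</function>, assuming a <literal>COPY ... TO
stdout</literal> command has just returned <literal>PGRES_COPY_OUT</literal>,
might look like this (a real application would wait on the socket instead of
spinning when no data is ready):
<programlisting>
#include &lt;stdio.h&gt;
#include "libpq-fe.h"

static void
drain_copy_out(PGconn *conn)
{
    char    buf[8192];
    int     n;

    for (;;)
    {
        /* Pull in whatever the backend has sent so far. */
        if (!PQconsumeInput(conn))
        {
            fprintf(stderr, "PQconsumeInput failed: %s", PQerrorMessage(conn));
            break;
        }

        n = PQgetlineAsync(conn, buf, (int) sizeof(buf));
        if (n &lt; 0)
            break;              /* end-of-data has been recognized */
        if (n == 0)
            continue;           /* no complete line available yet */

        /* n bytes of a (possibly partial) data line are now in buf; the
         * line is complete when its last byte is a newline. */
        fwrite(buf, 1, n, stdout);
    }

    if (PQendcopy(conn) != 0)
        fprintf(stderr, "PQendcopy failed: %s", PQerrorMessage(conn));
}
</programlisting>
</para>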
</listitem>
@ -1788,7 +1818,7 @@ void PQtrace(PGconn *conn
<listitem>
<para>
<function>PQuntrace</function>
Disable tracing started by PQtrace
Disable tracing started by <function>PQtrace</function>.
<synopsis>
void PQuntrace(PGconn *conn)
</synopsis>
@ -1842,7 +1872,7 @@ defaultNoticeProcessor(void * arg, const char * message)
</programlisting>
To use a special notice processor, call
<function>PQsetNoticeProcessor</function> just after
creation of a new PGconn object.
creation of a new <structname>PGconn</> object.
</para>
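<para>
For example, a trivial replacement that merely tags messages (the
<quote>myapp</quote> prefix is only illustrative) could be installed like
this:
<programlisting>
#include &lt;stdio.h&gt;
#include "libpq-fe.h"

/* A do-almost-nothing notice processor. */
static void
myNoticeProcessor(void *arg, const char *message)
{
    (void) arg;                 /* unused in this sketch */
    fprintf(stderr, "myapp: %s", message);
}

/* Install it just after the PGconn object has been created. */
static void
install_notice_processor(PGconn *conn)
{
    PQsetNoticeProcessor(conn, myNoticeProcessor, NULL);
}
</programlisting>
</para>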
<para>
@ -1853,9 +1883,9 @@ but the current pointer is returned.
<para>
Once you have set a notice processor, you should expect that that function
could be called as long as either the PGconn object or PGresult objects
made from it exist. At creation of a PGresult, the PGconn's current
notice processor pointer is copied into the PGresult for possible use by
could be called as long as either the <structname>PGconn</> object or <structname>PGresult</> objects
made from it exist. At creation of a <structname>PGresult</>, the <structname>PGconn</>'s current
notice processor pointer is copied into the <structname>PGresult</> for possible use by
routines like <function>PQgetvalue</function>.
</para>
@ -1884,7 +1914,7 @@ application programs.
<envar>PGHOST</envar> sets the default server name.
If this begins with a slash, it specifies Unix-domain communication
rather than TCP/IP communication; the value is the name of the
directory in which the socket file is stored (default "/tmp").
directory in which the socket file is stored (default <filename>/tmp</filename>).
</para>
</listitem>
<listitem>
@ -1894,7 +1924,7 @@ directory in which the socket file is stored (default "/tmp").
</indexterm>
<envar>PGPORT</envar> sets the default TCP port number or Unix-domain
socket file extension for communicating with the
<productname>Postgres</productname> backend.
<productname>PostgreSQL</productname> backend.
</para>
</listitem>
<listitem>
@ -1903,7 +1933,7 @@ socket file extension for communicating with the
<primary><envar>PGDATABASE</envar></primary>
</indexterm>
<envar>PGDATABASE</envar> sets the default
<productname>Postgres</productname> database name.
<productname>PostgreSQL</productname> database name.
</para>
</listitem>
<listitem>
@ -1912,7 +1942,7 @@ socket file extension for communicating with the
<primary><envar>PGUSER</envar></primary>
</indexterm>
<envar>PGUSER</envar>
sets the username used to connect to the database and for authentication.
sets the user name used to connect to the database and for authentication.
</para>
</listitem>
<listitem>
@ -1927,8 +1957,8 @@ sets the password used if the backend demands password authentication.
<listitem>
<para>
<envar>PGREALM</envar> sets the Kerberos realm to use with
<productname>Postgres</productname>, if it is different from the local realm.
If <envar>PGREALM</envar> is set, <productname>Postgres</productname>
<productname>PostgreSQL</productname>, if it is different from the local realm.
If <envar>PGREALM</envar> is set, <productname>PostgreSQL</productname>
applications will attempt authentication with servers for this realm and use
separate ticket files to avoid conflicts with local
ticket files. This environment variable is only
@ -1938,7 +1968,7 @@ used if Kerberos authentication is selected by the backend.
<listitem>
<para>
<envar>PGOPTIONS</envar> sets additional runtime options for
the <productname>Postgres</productname> backend.
the <productname>PostgreSQL</productname> backend.
</para>
</listitem>
<listitem>
@ -1952,7 +1982,7 @@ messages from the backend server are displayed.
<para>
The following environment variables can be used to specify user-level default
behavior for every Postgres session:
behavior for every <productname>PostgreSQL</productname> session:
<itemizedlist>
<listitem>
@ -1971,7 +2001,7 @@ sets the default time zone.
<para>
<envar>PGCLIENTENCODING</envar>
sets the default client encoding (if MULTIBYTE support was selected
when configuring Postgres).
when configuring <productname>PostgreSQL</productname>).
</para>
</listitem>
</itemizedlist>
@ -1979,7 +2009,7 @@ when configuring Postgres).
<para>
The following environment variables can be used to specify default internal
behavior for every Postgres session:
behavior for every <productname>PostgreSQL</productname> session:
<itemizedlist>
<listitem>
@ -2008,22 +2038,22 @@ for information on correct values for these environment variables.
<para>
<filename>libpq</filename> is thread-safe as of
<productname>Postgres</productname> 7.0, so long as no two threads
attempt to manipulate the same PGconn object at the same time. In particular,
you can't issue concurrent queries from different threads through the same
<productname>PostgreSQL</productname> 7.0, so long as no two threads
attempt to manipulate the same <structname>PGconn</> object at the same time. In particular,
you cannot issue concurrent queries from different threads through the same
connection object. (If you need to run concurrent queries, start up multiple
connections.)
</para>
<para>
PGresult objects are read-only after creation, and so can be passed around
<structname>PGresult</> objects are read-only after creation, and so can be passed around
freely between threads.
</para>
<para>
The deprecated functions <function>PQoidStatus</function> and
<function>fe_setauthsvc</function> are not thread-safe and should not be
used in multi-thread programs. <function>PQoidStatus</function> can be
used in multithreaded programs. <function>PQoidStatus</function> can be
replaced by <function>PQoidValue</function>. There is no good reason to
call <function>fe_setauthsvc</function> at all.
</para>
@ -2032,10 +2062,10 @@ call <function>fe_setauthsvc</function> at all.
<sect1 id="libpq-build">
<title>Building Libpq Programs</title>
<title>Building <application>Libpq</application> Programs</title>
<para>
To build (i.e., compile and link) your libpq programs you need to
To build (i.e., compile and link) your <application>libpq</application> programs you need to
do all of the following things:
<itemizedlist>
@ -2101,10 +2131,10 @@ testlibpq.c:8:22: libpq-fe.h: No such file or directory
<listitem>
<para>
When linking the final program, specify the option
<literal>-lpq</literal> so that the libpq library gets pulled
<literal>-lpq</literal> so that the <application>libpq</application> library gets pulled
in, as well as the option
<literal>-L<replaceable>directory</replaceable></literal> to
point it to the directory where libpq resides. (Again, the
point it to the directory where the <application>libpq</application> library resides. (Again, the
compiler will search some directories by default.) For maximum
portability, put the <option>-L</option> option before the
<option>-lpq</option> option. For example:
@ -2158,7 +2188,7 @@ testlibpq.o(.text+0xa4): undefined reference to `PQerrorMessage'
<title>Example Programs</title>
<example id="libpq-example-1">
<title>libpq Example Program 1</title>
<title><application>libpq</application> Example Program 1</title>
<programlisting>
/*
@ -2291,7 +2321,7 @@ main()
</example>
<example id="libpq-example-2">
<title>libpq Example Program 2</title>
<title><application>libpq</application> Example Program 2</title>
<programlisting>
/*
@ -2411,7 +2441,7 @@ main()
</example>
<example id="libpq-example-3">
<title>libpq Example Program 3</>
<title><application>libpq</application> Example Program 3</>
<programlisting>
/*

View File

@ -1,5 +1,5 @@
<!--
$Header: /cvsroot/pgsql/doc/src/sgml/lobj.sgml,v 1.18 2001/09/10 04:15:41 momjian Exp $
$Header: /cvsroot/pgsql/doc/src/sgml/lobj.sgml,v 1.19 2001/09/10 21:58:47 petere Exp $
-->
<chapter id="largeObjects">
@ -40,7 +40,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/lobj.sgml,v 1.18 2001/09/10 04:15:41 momjia
objects interchangeably to mean the same thing in this
section.)
Since <productname>PostgreSQL</productname> 7.1, all large objects are placed in
one system table called pg_largeobject.
one system table called <classname>pg_largeobject</classname>.
</para>
</sect1>
@ -87,9 +87,9 @@ $Header: /cvsroot/pgsql/doc/src/sgml/lobj.sgml,v 1.18 2001/09/10 04:15:41 momjia
<function>lseek(2)</function>, etc. User
functions call these routines to retrieve only the data of
interest from a large object. For example, if a large
object type called mugshot existed that stored
object type called <type>mugshot</type> existed that stored
photographs of faces, then a function called beard could
be declared on mugshot data. Beard could look at the
be declared on <type>mugshot</type> data. Beard could look at the
lower third of a photograph, and determine the color of
the beard that appeared there, if any. The entire
large object value need not be buffered, or even
@ -110,13 +110,13 @@ $Header: /cvsroot/pgsql/doc/src/sgml/lobj.sgml,v 1.18 2001/09/10 04:15:41 momjia
Oid lo_creat(PGconn *<replaceable class="parameter">conn</replaceable>, int <replaceable class="parameter">mode</replaceable>)
</synopsis>
creates a new large object.
<replaceable class="parameter">mode</replaceable> is a bitmask
<replaceable class="parameter">mode</replaceable> is a bit mask
describing several different attributes of the new
object. The symbolic constants listed here are defined
in
<filename>$<envar>PGROOT</envar>/src/backend/libpq/libpq-fs.h</filename>.
The access type (read, write, or both) is controlled by
OR ing together the bits <acronym>INV_READ</acronym> and
OR'ing together the bits <acronym>INV_READ</acronym> and
<acronym>INV_WRITE</acronym>. The low-order sixteen bits of mask are
the storage manager number on which the large object
should reside. For sites other than Berkeley, these
@ -137,7 +137,7 @@ inv_oid = lo_creat(INV_READ|INV_WRITE);
Oid lo_import(PGconn *<replaceable class="parameter">conn</replaceable>, const char *<replaceable class="parameter">filename</replaceable>)
</synopsis>
<replaceable class="parameter">filename</replaceable>
specifies the <acronym>Unix</acronym> pathname of
specifies the <acronym>Unix</acronym> path name of
the file to be imported as a large object.
</para>
</sect2>
@ -151,9 +151,9 @@ Oid lo_import(PGconn *<replaceable class="parameter">conn</replaceable>, const c
<synopsis>
int lo_export(PGconn *<replaceable class="parameter">conn</replaceable>, Oid <replaceable class="parameter">lobjId</replaceable>, const char *<replaceable class="parameter">filename</replaceable>)
</synopsis>
The lobjId argument specifies the Oid of the large
object to export and the filename argument specifies
the <acronym>UNIX</acronym> pathname of the file.
The <parameter>lobjId</parameter> argument specifies the Oid of the large
object to export and the <parameter>filename</parameter> argument specifies
the <acronym>UNIX</acronym> path name of the file.
</para>
</sect2>
@ -165,9 +165,9 @@ int lo_export(PGconn *<replaceable class="parameter">conn</replaceable>, Oid <re
<synopsis>
int lo_open(PGconn *conn, Oid lobjId, int mode)
</synopsis>
The lobjId argument specifies the Oid of the large
object to open. The mode bits control whether the
object is opened for reading INV_READ), writing or
The <parameter>lobjId</parameter> argument specifies the Oid of the large
object to open. The <parameter>mode</parameter> bits control whether the
object is opened for reading (<symbol>INV_READ</>), writing (<symbol>INV_WRITE</>), or
both.
A large object cannot be opened before it is created.
<function>lo_open</function> returns a large object descriptor
@ -185,7 +185,7 @@ int lo_open(PGconn *conn, Oid lobjId, int mode)
<programlisting>
int lo_write(PGconn *conn, int fd, const char *buf, size_t len)
</programlisting>
writes len bytes from buf to large object fd. The fd
writes <parameter>len</parameter> bytes from <parameter>buf</parameter> to large object <parameter>fd</>. The <parameter>fd</parameter>
argument must have been returned by a previous <function>lo_open</function>.
The number of bytes actually written is returned. In
the event of an error, the return value is negative.
@ -200,7 +200,7 @@ int lo_write(PGconn *conn, int fd, const char *buf, size_t len)
<programlisting>
int lo_read(PGconn *conn, int fd, char *buf, size_t len)
</programlisting>
reads len bytes from large object fd into buf. The fd
reads <parameter>len</parameter> bytes from large object <parameter>fd</parameter> into <parameter>buf</parameter>. The <parameter>fd</parameter>
argument must have been returned by a previous <function>lo_open</function>.
The number of bytes actually read is returned. In
the event of an error, the return value is negative.
@ -245,7 +245,7 @@ int lo_close(PGconn *conn, int fd)
<synopsis>
Oid lo_unlink(PGconn *<replaceable class="parameter">conn</replaceable>, Oid lobjId)
</synopsis>
The lobjId argument specifies the Oid of the large
The <parameter>lobjId</parameter> argument specifies the Oid of the large
object to remove.
</para>
</sect2>
@ -278,20 +278,20 @@ SELECT lo_export(image.raster, '/tmp/motd') from image
</sect1>
<sect1 id="lo-libpq">
<title>Accessing Large Objects from LIBPQ</title>
<title>Accessing Large Objects from <application>Libpq</application></title>
<para>
Below is a sample program which shows how the large object
interface
in LIBPQ can be used. Parts of the program are
in <application>libpq</> can be used. Parts of the program are
commented out but are left in the source for the reader's
benefit. This program can be found in
<filename>../src/test/examples</filename>.
Frontend applications which use the large object interface
in LIBPQ should include the header file
libpq/libpq-fs.h and link with the libpq library.
in <application>libpq</application> should include the header file
<filename>libpq/libpq-fs.h</filename> and link with the <application>libpq</application> library.
</para>
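<para>
As a minimal, abbreviated sketch (most error handling omitted, and the example
string is arbitrary), creating a large object and writing some data into it
could look like this; note that the calls are wrapped in a transaction block:
<programlisting>
#include &lt;stdio.h&gt;
#include &lt;string.h&gt;
#include "libpq-fe.h"
#include "libpq/libpq-fs.h"     /* for INV_READ and INV_WRITE */

static Oid
create_example_large_object(PGconn *conn)
{
    char        data[] = "hello, large object";
    PGresult   *res;
    Oid         lobjId;
    int         fd;

    res = PQexec(conn, "BEGIN");
    if (res != NULL)
        PQclear(res);

    lobjId = lo_creat(conn, INV_READ | INV_WRITE);
    if (lobjId == 0)
        fprintf(stderr, "lo_creat failed: %s", PQerrorMessage(conn));

    fd = lo_open(conn, lobjId, INV_WRITE);
    if (fd &lt; 0)
        fprintf(stderr, "lo_open failed: %s", PQerrorMessage(conn));

    if (lo_write(conn, fd, data, strlen(data)) &lt; 0)
        fprintf(stderr, "lo_write failed: %s", PQerrorMessage(conn));

    lo_close(conn, fd);

    res = PQexec(conn, "END");
    if (res != NULL)
        PQclear(res);

    return lobjId;
}
</programlisting>
</para>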
</sect1>

View File

@ -1,5 +1,5 @@
<!--
$Header: /cvsroot/pgsql/doc/src/sgml/Attic/odbc.sgml,v 1.22 2001/05/12 22:51:35 petere Exp $
$Header: /cvsroot/pgsql/doc/src/sgml/Attic/odbc.sgml,v 1.23 2001/09/10 21:58:47 petere Exp $
-->
<chapter id="odbc">
@ -53,10 +53,10 @@ $Header: /cvsroot/pgsql/doc/src/sgml/Attic/odbc.sgml,v 1.22 2001/05/12 22:51:35
<para>
The backend access come from <acronym>ODBC</acronym> drivers,
or vendor specifc drivers that
or vendor-specific drivers that
allow data access. <productname>psqlODBC</productname> is such a driver,
along with others that are
available, such as the OpenLink <acronym>ODBC</acronym> drivers.
available, such as the <productname>OpenLink</productname> <acronym>ODBC</acronym> drivers.
</para>
<para>
@ -133,8 +133,8 @@ psql -d template1 -f <replaceable>LOCATION</>/odbc.sql
<title>Supported Platforms</title>
<para>
<productname>psqlODBC</productname> has been built and tested
on <productname>Linux</productname>. There have been reports of success
with FreeBSD and with Solaris. There are no known restrictions
on <systemitem class="osname">Linux</>. There have been reports of success
with <systemitem class="osname">FreeBSD</> and with <systemitem class="osname">Solaris</>. There are no known restrictions
on the basic code for other platforms which already support
<productname>Postgres</productname>.
</para>
@ -248,9 +248,9 @@ InstallDir = /opt/applix/axdata/axshlib
<listitem>
<para>
In Visual C++, you can use the CRecordSet class, which wraps the
In Visual C++, you can use the <classname>CRecordSet</classname> class, which wraps the
<acronym>ODBC</acronym> <acronym>API</acronym>
set within an MFC 4.2 class. This is the easiest route if you are doing
set within an <application>MFC</application> 4.2 class. This is the easiest route if you are doing
Windows C++ development under Windows NT.
</para>
</listitem>
@ -277,7 +277,7 @@ InstallDir = /opt/applix/axdata/axshlib
</para>
<para>
Visual Basic and the other RAD tools have Recordset objects
Visual Basic and the other <acronym>RAD</acronym> tools have <classname>Recordset</classname> objects
that use <acronym>ODBC</acronym>
directly to access data. Using the data-aware controls, you can quickly
link to the <acronym>ODBC</acronym> back end database
@ -291,7 +291,7 @@ InstallDir = /opt/applix/axdata/axshlib
<tip>
<para>
You'll have to set up a DSN first.
You'll have to set up a <acronym>DSN</acronym> first.
</para>
</tip>
@ -299,7 +299,7 @@ InstallDir = /opt/applix/axdata/axshlib
</sect1>
<sect1 id="odbc-applixware">
<title>ApplixWare</title>
<title><application>ApplixWare</application></title>
<indexterm zone="odbc-applixware">
<primary>ApplixWare</primary>
@ -326,7 +326,7 @@ InstallDir = /opt/applix/axdata/axshlib
</para>
<procedure>
<title>Enabling ApplixWare Database Access</title>
<title>Enabling <application>ApplixWare</application> Database Access</title>
<para>
These instructions are for the <literal>4.4.2</literal> release of
@ -341,7 +341,7 @@ InstallDir = /opt/applix/axdata/axshlib
<filename>elfodbc</filename> can
find <filename>libodbc.so</filename>
(the <acronym>ODBC</acronym> driver manager) shared library.
This library is included with the ApplixWare distribution,
This library is included with the <application>ApplixWare</application> distribution,
but <filename>axnet.cnf</filename> needs to be modified to point to the
correct location.
</para>
@ -371,7 +371,7 @@ InstallDir = /opt/applix/axdata/axshlib
libFor elfodbc <replaceable>applixroot</replaceable>/applix/axdata/axshlib/lib
</programlisting>
which will tell elfodbc to look in this directory
which will tell <literal>elfodbc</literal> to look in this directory
for the <acronym>ODBC</acronym> support library.
Typically <productname>Applix</productname> is installed in
<filename>/opt</filename> so the full path would be
@ -399,7 +399,7 @@ TextAsLongVarchar=0
</procedure>
<procedure>
<title>Testing ApplixWare ODBC Connections</title>
<title>Testing <application>ApplixWare</application> ODBC Connections</title>
<step performance="required">
<para>
@ -424,7 +424,7 @@ TextAsLongVarchar=0
Select <acronym>ODBC</acronym>, and click <command>Browse</command>.
The database you configured in <filename>.odbc.ini</filename>
should be shown. Make sure that the <option>Host: field</option>
is empty (if it is not, axnet will try to contact axnet on another machine
is empty (if it is not, <literal>axnet</> will try to contact <literal>axnet</> on another machine
to look for the database).
</para>
</step>
@ -436,14 +436,14 @@ TextAsLongVarchar=0
</step>
<step performance="required">
<para>
Enter username and password in the login identification dialog,
Enter user name and password in the login identification dialog,
and click <command>OK</command>.
</para>
</step>
</substeps>
<para>
You should see "<literal>Starting elfodbc server</literal>"
You should see <literal>Starting elfodbc server</literal>
in the lower left corner of the
data window. If you get an error dialog box, see the debugging section
below.
@ -476,7 +476,7 @@ TextAsLongVarchar=0
<variablelist>
<varlistentry>
<term>
Cannot launch gateway on server
<computeroutput>Cannot launch gateway on server</computeroutput>
</term>
<listitem>
<para>
@ -487,9 +487,9 @@ TextAsLongVarchar=0
</varlistentry>
<varlistentry>
<term>
<term><computeroutput>
Error from ODBC Gateway:
IM003::[iODBC][Driver Manager]Specified driver could not be loaded
IM003::[iODBC][Driver Manager]Specified driver could not be loaded</computeroutput>
</term>
<listitem>
<para>
@ -501,7 +501,7 @@ TextAsLongVarchar=0
<varlistentry>
<term>
Server: Broken Pipe
<computeroutput>Server: Broken Pipe</computeroutput>
</term>
<listitem>
@ -516,15 +516,15 @@ TextAsLongVarchar=0
<varlistentry>
<term>
setuid to 256: failed to launch gateway
<computeroutput>setuid to 256: failed to launch gateway</computeroutput>
</term>
<listitem>
<para>
The September release of ApplixWare v4.4.1 (the first release with official
<acronym>ODBC</acronym> support under Linux) shows problems when usernames
The September release of <application>ApplixWare</application> v4.4.1 (the first release with official
<acronym>ODBC</acronym> support under Linux) shows problems when user names
exceed eight (8) characters in length.
Problem description ontributed by Steve Campbell
Problem description contributed by Steve Campbell
(<email>scampbell@lear.com</email>).
</para>
</listitem>
@ -554,24 +554,24 @@ TextAsLongVarchar=0
</sect2>
<sect2>
<title>Debugging ApplixWare ODBC Connections</title>
<title>Debugging <application>ApplixWare</application> ODBC Connections</title>
<para>
One good tool for debugging connection problems is the Unix system
utility <application>strace</application>.
</para>
<procedure>
<title>Debugging with strace</title>
<title>Debugging with <command>strace</command></title>
<step performance="required">
<para>
Start applixware.
Start <application>ApplixWare</application>.
</para>
</step>
<step performance="required">
<para>
Start an <application>strace</application> on
the axnet process. For example, if
the <literal>axnet</literal> process. For example, if
<programlisting>
% ps -aucx | grep ax
@ -596,7 +596,7 @@ cary 27883 0.9 31.0 12692 4596 ? S 10:24 0:04 axmain
<step performance="required">
<para>
Check the strace output.
Check the <command>strace</command> output.
</para>
<note>
<title>Note from Cary</title>
@ -613,24 +613,24 @@ cary 27883 0.9 31.0 12692 4596 ? S 10:24 0:04 axmain
<para>
For example, after getting
a "<literal>Cannot launch gateway on server</literal>",
I ran strace on axnet and got
a <literal>Cannot launch gateway on server</literal>,
I ran <command>strace</command> on <literal>axnet</literal> and got
<programlisting>
<programlisting>
[pid 27947] open("/usr/lib/libodbc.so", O_RDONLY) = -1 ENOENT
(No such file or directory)
[pid 27947] open("/lib/libodbc.so", O_RDONLY) = -1 ENOENT
(No such file or directory)
[pid 27947] write(2, "/usr2/applix/axdata/elfodbc:
can't load library 'libodbc.so'\n", 61) = -1 EIO (I/O error)
</programlisting>
So what is happening is that applix elfodbc is searching for libodbc.so, but it
can't find it. That is why axnet.cnf needed to be changed.
</programlisting>
So what is happening is that <literal>applix elfodbc</literal> is searching for <filename>libodbc.so</filename>, but it
cannot find it. That is why <filename>axnet.cnf</filename> needed to be changed.
</para>
</sect2>
<sect2>
<title>Running the ApplixWare Demo</title>
<title>Running the <application>ApplixWare</application> Demo</title>
<para>
In order to go through the
@ -645,7 +645,7 @@ can't load library 'libodbc.so'\n", 61) = -1 EIO (I/O error)
</para>
<procedure>
<title>Modifying the ApplixWare Demo</title>
<title>Modifying the <application>ApplixWare</application> Demo</title>
<step performance="required">
<para>
@ -683,7 +683,7 @@ can't load library 'libodbc.so'\n", 61) = -1 EIO (I/O error)
<step performance="required">
<para>
Open the sqldemo.am file from the <application>Macro Editor</application>.
Open the <filename>sqldemo.am</filename> file from the <application>Macro Editor</application>.
</para>
</step>
@ -734,7 +734,7 @@ can't load library 'libodbc.so'\n", 61) = -1 EIO (I/O error)
<para>
You can add information about your
database login and password to the standard Applix start-up
database login and password to the standard <application>Applix</application> start-up
macro file. This is an example
<filename>~/axhome/macros/login.am</filename> file:
@ -748,7 +748,7 @@ endmacro
<caution>
<para>
You should be careful about the file protections on any file containing
username and password information.
user name and password information.
</para>
</caution>
</para>

View File

@ -1,5 +1,5 @@
<!--
$Header: /cvsroot/pgsql/doc/src/sgml/plperl.sgml,v 2.9 2001/06/22 21:37:14 momjian Exp $
$Header: /cvsroot/pgsql/doc/src/sgml/plperl.sgml,v 2.10 2001/09/10 21:58:47 petere Exp $
-->
<chapter id="plperl">
@ -21,10 +21,10 @@ $Header: /cvsroot/pgsql/doc/src/sgml/plperl.sgml,v 2.9 2001/06/22 21:37:14 momji
<para>
The PL/Perl interpreter (when installed as trusted interpreter with
default name 'plperl') intepreter is a full Perl interpreter. However, certain
default name <literal>plperl</>) is a full Perl interpreter. However, certain
operations have been disabled in order to maintain the security of
the system. In general, the operations that are restricted are
those that interact with the environment. This includes filehandle
those that interact with the environment. This includes file handle
operations, <literal>require</literal>, and <literal>use</literal>
(for external modules). It should be noted that this security is
not absolute. Indeed, several Denial-of-Service attacks are still
@ -32,8 +32,8 @@ $Header: /cvsroot/pgsql/doc/src/sgml/plperl.sgml,v 2.9 2001/06/22 21:37:14 momji
</para>
<para>
When PL/Perl is installed as 'untrusted' interpreter (with name 'plperlu'),
everything is permitted, and any perl code can be loaded (by superuser only).
When PL/Perl is installed as an <quote>untrusted</> interpreter (with name <literal>plperlu</literal>),
everything is permitted, and any Perl code can be loaded (by superuser only).
</para>
<sect1 id="plperl-install">
@ -170,13 +170,13 @@ CREATE FUNCTION badfunc() RETURNS integer AS '
The creation of the function will succeed, but executing it will not.
Note that if the same function was created by a superuser using language
'plperlu', execution would succeed.
<literal>plperlu</>, execution would succeed.
</para>
<para>
Access to database itself from your perl function can be done via
an experimental module DBD::PgSPI, available at <ulink url="http://www.formenos.org/PgSPI/">this site</ulink>. This module makes available a DBI-compliant
database-handle named $pg_dbh, and you can use that to make queries with
normal DBI syntax.
Access to the database itself from your Perl function can be done via
an experimental module <ulink url="http://www.formenos.org/PgSPI/"><literal>DBD::PgSPI</literal></ulink>. This module makes available a <acronym>DBI</>-compliant
database handle named <varname>$pg_dbh</varname>, which you can use to make queries with
normal <acronym>DBI</> syntax.
</para>
</sect1>

View File

@ -1,4 +1,4 @@
<!-- $Header: /cvsroot/pgsql/doc/src/sgml/plpython.sgml,v 1.1 2001/05/12 17:49:32 petere Exp $ -->
<!-- $Header: /cvsroot/pgsql/doc/src/sgml/plpython.sgml,v 1.2 2001/09/10 21:58:47 petere Exp $ -->
<chapter id="plpython">
<title>PL/Python - Python Procedural Language</title>
@ -49,9 +49,9 @@ def __plpython_procedure_myfunc_23456():
<para>
PostgreSQL function variables are available in the global
<varname>args</varname> list. In the <function>myfunc</function>
example, args[0] contains whatever was passed in as the text
argument. For <literal>myfunc2(text, int4)</literal>, args[0]
would contain the text variable and args[1] the int4 variable.
example, <varname>args[0]</> contains whatever was passed in as the text
argument. For <literal>myfunc2(text, integer)</literal>, <varname>args[0]</>
would contain the <type>text</type> variable and <varname>args[1]</varname> the <type>integer</type> variable.
</para>
<para>
@ -59,12 +59,12 @@ def __plpython_procedure_myfunc_23456():
function calls. This variable is private static data. The global
dictionary <varname>GD</varname> is public data, available to all Python functions
within a backend. Use with care. When the function is used in a
trigger, the triggers tuples are in TD["new"] and/or TD["old"]
trigger, the triggers tuples are in <literal>TD["new"]</literal> and/or <literal>TD["old"]</literal>
depending on the trigger event. Return 'None' or "OK" from the
Python function to indicate the tuple is unmodified, "SKIP" to
abort the event, or "MODIFIED" to indicate you've modified the
tuple. If the trigger was called with arguments they are available
in TD["args"][0] to TD["args"][(n -1)]
in <literal>TD["args"][0] to TD["args"][(n -1)]</literal>.
</para>
<para>
@ -98,7 +98,7 @@ def __plpython_procedure_myfunc_23456():
</para>
<para>
Additionally, the plpy module provides two functions called
Additionally, the <literal>plpy</literal> module provides two functions called
<function>execute</function> and <function>prepare</function>.
Calling <function>plpy.execute</function> with a query string, and
an optional limit argument, causes that query to be run, and the

View File

@ -1,16 +1,16 @@
<!--
$Header: /cvsroot/pgsql/doc/src/sgml/Attic/plsql.sgml,v 2.37 2001/09/10 06:35:34 ishii Exp $
$Header: /cvsroot/pgsql/doc/src/sgml/Attic/plsql.sgml,v 2.38 2001/09/10 21:58:47 petere Exp $
-->
<chapter id="plpgsql">
<title>PL/pgSQL - <acronym>SQL</acronym> Procedural Language</title>
<title><application>PL/pgSQL</application> - <acronym>SQL</acronym> Procedural Language</title>
<indexterm zone="plpgsql">
<primary>PL/pgSQL</primary>
</indexterm>
<para>
PL/pgSQL is a loadable procedural language for the
<application>PL/pgSQL</application> is a loadable procedural language for the
<productname>Postgres</productname> database system.
</para>
@ -24,7 +24,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/Attic/plsql.sgml,v 2.37 2001/09/10 06:35:34
<title>Overview</title>
<para>
The design goals of PL/pgSQL were to create a loadable procedural
The design goals of <application>PL/pgSQL</> were to create a loadable procedural
language that
<itemizedlist>
<listitem>
@ -60,7 +60,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/Attic/plsql.sgml,v 2.37 2001/09/10 06:35:34
</itemizedlist>
</para>
<para>
The PL/pgSQL call handler parses the function's source text and
The <application>PL/pgSQL</> call handler parses the function's source text and
produces an internal binary instruction tree the first time the
function is called. The produced bytecode is identified
in the call handler by the object ID of the function. This ensures
@ -69,12 +69,12 @@ $Header: /cvsroot/pgsql/doc/src/sgml/Attic/plsql.sgml,v 2.37 2001/09/10 06:35:34
</para>
<para>
For all expressions and <acronym>SQL</acronym> statements used in
the function, the PL/pgSQL bytecode interpreter creates a
the function, the <application>PL/pgSQL</> bytecode interpreter creates a
prepared execution plan using the <acronym>SPI</acronym> manager's
<function>SPI_prepare()</function> and
<function>SPI_saveplan()</function> functions. This is done the
first time the individual
statement is processed in the PL/pgSQL function. Thus, a function with
statement is processed in the <application>PL/pgSQL</> function. Thus, a function with
conditional code that contains many statements for which execution
plans would be required, will only prepare and save those plans
that are really used during the lifetime of the database
@ -102,18 +102,18 @@ END;
</para>
<para>
Because PL/pgSQL saves execution plans in this way, queries that appear
directly in a PL/pgSQL function must refer to the same tables and fields
Because <application>PL/pgSQL</application> saves execution plans in this way, queries that appear
directly in a <application>PL/pgSQL</application> function must refer to the same tables and fields
on every execution; that is, you cannot use a parameter as the name of
a table or field in a query. To get around
this restriction, you can construct dynamic queries using the PL/pgSQL
this restriction, you can construct dynamic queries using the <application>PL/pgSQL</application>
EXECUTE statement --- at the price of constructing a new query plan
on every execution.
</para>
<para>
Except for input/output conversion and calculation functions
for user defined types, anything that can be defined in C language
functions can also be done with PL/pgSQL. It is possible to
functions can also be done with <application>PL/pgSQL</application>. It is possible to
create complex conditional computation functions and later use
them to define operators or use them in functional indexes.
</para>
@ -162,13 +162,13 @@ END;
</para>
<para>
With PL/pgSQL you can group a block of computation and a
With <application>PL/pgSQL</application> you can group a block of computation and a
series of queries <emphasis>inside</emphasis> the
database server, thus having the power of a procedural
language and the ease of use of SQL, but saving lots of
time because you don't have the whole client/server
communication overhead. Your application will enjoy a
considerable performance increase by using PL/pgSQL.
considerable performance increase by using <application>PL/pgSQL</application>.
</para>
</sect3>
@ -176,9 +176,9 @@ END;
<title>SQL Support</title>
<para>
PL/pgSQL adds the power of a procedural language to the
<application>PL/pgSQL</application> adds the power of a procedural language to the
flexibility and ease of <acronym>SQL</acronym>. With
PL/pgSQL you can use all the datatypes, columns, operators
<application>PL/pgSQL</application> you can use all the data types, columns, operators
and functions of SQL.
</para>
</sect3>
@ -187,7 +187,7 @@ END;
<title>Portability</title>
<para>
Because PL/pgSQL functions run inside PostgreSQL, these
Because <application>PL/pgSQL</application> functions run inside PostgreSQL, these
functions will run on any platform where PostgreSQL
runs. Thus you can reuse code and have less development costs.
</para>
@ -195,13 +195,13 @@ END;
</sect2>
<sect2 id="plpgsql-overview-developing-in-plpgsql">
<title>Developing in PL/pgSQL</title>
<title>Developing in <application>PL/pgSQL</application></title>
<para>
Developing in PL/pgSQL is pretty straight forward, especially
Developing in <application>PL/pgSQL</application> is pretty straightforward, especially
if you have developed in other database procedural languages,
such as Oracle's PL/SQL. Two good ways of developing in
PL/pgSQL are:
such as Oracle's <application>PL/SQL</application>. Two good ways of developing in
<application>PL/pgSQL</application> are:
<itemizedlist>
<listitem>
@ -212,18 +212,18 @@ END;
<listitem>
<para>
Using PostgreSQL's GUI Tool: pgaccess
Using PostgreSQL's GUI Tool: <application>PgAccess</>
</para>
</listitem>
</itemizedlist>
</para>
<para>
One good way to develop in PL/pgSQL is to simply use the text
One good way to develop in <application>PL/pgSQL</> is to simply use the text
editor of your choice to create your functions, and in another
console, use <command>psql</command> (PostgreSQL's interactive monitor) to load
those functions. If you are doing it this way (and if you are
a PL/pgSQL novice or in debugging stage), it is a good idea to
a <application>PL/pgSQL</> novice or in debugging stage), it is a good idea to
always <command>DROP</command> your function before creating it. That way
when you reload the file, it'll drop your functions and then
re-create them. For example:
@ -240,15 +240,15 @@ end;
When you load the file for the first time,
<productname>PostgreSQL</> will raise a warning saying this
function doesn't exist and go on to create it. To load an SQL
file (filename.sql) into a database named "dbname", use the command:
file (e.g., <filename>filename.sql</filename>) into a database named <literal>dbname</literal>, use the command:
<programlisting>
psql -f filename.sql dbname
</programlisting>
</para>
<para>
Another good way to develop in PL/pgSQL is using
<productname>PostgreSQL</>'s GUI tool: pgaccess. It does some
Another good way to develop in <application>PL/pgSQL</> is using
<productname>PostgreSQL</>'s GUI tool: <application>PgAccess</>. It does some
nice things for you, like escaping single-quotes, and making
it easy to recreate and debug functions.
</para>
@ -263,10 +263,10 @@ psql -f filename.sql dbname
<!-- **** PL/pgSQL structure **** -->
<sect2>
<title>Structure of PL/pgSQL</title>
<title>Structure of <application>PL/pgSQL</application></title>
<para>
PL/pgSQL is a <emphasis>block structured</emphasis> language. All
<application>PL/pgSQL</application> is a <emphasis>block structured</emphasis> language. All
keywords and identifiers can be used in mixed upper and
lower-case. A block is defined as:
@ -316,8 +316,8 @@ END;
<para>
It is important not to confuse the use of BEGIN/END for
grouping statements in PL/pgSQL with the database commands for
transaction control. PL/pgSQL's BEGIN/END are only for grouping;
grouping statements in <application>PL/pgSQL</> with the database commands for
transaction control. <application>PL/pgSQL</>'s BEGIN/END are only for grouping;
they do not start or end a transaction. Functions and trigger procedures
are always executed within a transaction established by an outer query
--- they cannot start or commit transactions, since
@ -329,7 +329,7 @@ END;
<title>Comments</title>
<para>
There are two types of comments in PL/pgSQL. A double dash <literal>--</literal>
There are two types of comments in <application>PL/pgSQL</>. A double dash <literal>--</literal>
starts a comment that extends to the end of the line. A <literal>/*</literal>
starts a block comment that extends to the next occurrence of <literal>*/</literal>.
Block comments cannot be nested, but double dash comments can be
@ -350,7 +350,7 @@ END;
</para>
<para>
PL/pgSQL variables can have any SQL datatype, such as
<application>PL/pgSQL</> variables can have any SQL data type, such as
<type>INTEGER</type>, <type>VARCHAR</type> and
<type>CHAR</type>. All variables have as default value the
<acronym>SQL</acronym> NULL value.
@ -437,7 +437,7 @@ END;
<para>
Using the <type>%TYPE</type> and <type>%ROWTYPE</type>
attributes, you can declare variables with the same
datatype or structure of another database item (e.g: a
data type or structure as another database item (e.g., a
table field).
</para>
@ -448,13 +448,13 @@ END;
</term>
<listitem>
<para>
<type>%TYPE</type> provides the datatype of a
<type>%TYPE</type> provides the data type of a
variable or database column. You can use this to
declare variables that will hold database
values. For example, let's say you have a column
named <type>user_id</type> in your
<type>users</type> table. To declare a variable with
the same datatype as users.user_id you write:
the same data type as <structname>users</>.<structfield>user_id</> you write:
<programlisting>
user_id users.user_id%TYPE;
</programlisting>
@ -462,8 +462,8 @@ user_id users.user_id%TYPE;
<para>
By using <type>%TYPE</type> you don't need to know
the datatype of the structure you are referencing,
and most important, if the datatype of the
the data type of the structure you are referencing,
and most importantly, if the data type of the
referenced item changes in the future (e.g., you
change your table definition of user_id to become a
REAL), you won't need to change your function
@ -474,11 +474,11 @@ user_id users.user_id%TYPE;
<varlistentry>
<term>
<replaceable>table</replaceable>%ROWTYPE
<literal><replaceable>table</replaceable>%ROWTYPE</literal>
</term>
<listitem>
<para>
<type>%ROWTYPE</type> provides the composite datatype corresponding
<type>%ROWTYPE</type> provides the composite data type corresponding
to a whole row of the specified table.
<replaceable>table</replaceable> must be an existing
table or view name of the database. The fields of the row are
@ -560,12 +560,12 @@ RENAME this_var TO that_var;
<title>Expressions</title>
<para>
All expressions used in PL/pgSQL statements are processed using
the backend's executor. Expressions that appear to contain
All expressions used in <application>PL/pgSQL</application> statements are processed using
the executor of the server. Expressions that appear to contain
constants may in fact require run-time evaluation
(e.g. <literal>'now'</literal> for the
<type>timestamp</type> type) so
it is impossible for the PL/pgSQL parser
it is impossible for the <application>PL/pgSQL</application> parser
to identify real constant values other than the NULL keyword. All
expressions are evaluated internally by executing a query
<synopsis>
@ -574,7 +574,7 @@ SELECT <replaceable>expression</replaceable>
using the <acronym>SPI</acronym> manager. In the expression, occurrences of variable
identifiers are substituted by parameters and the actual values from
the variables are passed to the executor in the parameter array. All
expressions used in a PL/pgSQL function are only prepared and
expressions used in a <application>PL/pgSQL</application> function are only prepared and
saved once. The only exception to this rule is an EXECUTE statement,
where the query is parsed anew each time it is encountered.
</para>
@ -615,7 +615,7 @@ CREATE FUNCTION logfunc2 (text) RETURNS timestamp AS '
<productname>Postgres</productname> main parser knows when
preparing the plan for the INSERT, that the string
<literal>'now'</literal> should be interpreted as
<type>timestamp</type> because the target field of logtable
<type>timestamp</type> because the target field of <classname>logtable</classname>
is of that type. Thus, it will make a constant from it at this
time and this constant value is then used in all invocations of
<function>logfunc1()</function> during the lifetime of the
@ -629,19 +629,19 @@ CREATE FUNCTION logfunc2 (text) RETURNS timestamp AS '
what type <literal>'now'</literal> should become and therefore
it returns a data type of <type>text</type> containing the string
<literal>'now'</literal>. During the assignment
to the local variable curtime, the PL/pgSQL interpreter casts this
string to the timestamp type by calling the
to the local variable <varname>curtime</varname>, the <application>PL/pgSQL</application> interpreter casts this
string to the <type>timestamp</type> type by calling the
<function>text_out()</function> and <function>timestamp_in()</function>
functions for the conversion.
</para>
<para>
This type checking done by the <productname>Postgres</productname> main
parser got implemented after PL/pgSQL was nearly done.
parser got implemented after <application>PL/pgSQL</application> was nearly done.
It is a difference between 6.3 and 6.4 and affects all functions
using the prepared plan feature of the <acronym>SPI</acronym> manager.
Using a local
variable in the above manner is currently the only way in PL/pgSQL to get
variable in the above manner is currently the only way in <application>PL/pgSQL</application> to get
those values interpreted correctly.
</para>
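<para>
     A condensed sketch of that technique (the function name is only
     illustrative; quotes inside the function body are doubled as usual):
<programlisting>
CREATE FUNCTION log_at_calltime (text) RETURNS timestamp AS '
    DECLARE
        logtxt ALIAS FOR $1;
        curtime timestamp;
    BEGIN
        curtime := ''now'';  -- evaluated when the function is called
        INSERT INTO logtable VALUES (logtxt, curtime);
        RETURN curtime;
    END;
' LANGUAGE 'plpgsql';
</programlisting>
</para>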
@ -659,7 +659,7 @@ CREATE FUNCTION logfunc2 (text) RETURNS timestamp AS '
<title>Statements</title>
<para>
Anything not understood by the PL/pgSQL parser as specified below
Anything not understood by the <application>PL/pgSQL</application> parser as specified below
will be put into a query and sent down to the database engine
to execute. The resulting query should not return any data.
</para>
@ -675,8 +675,8 @@ CREATE FUNCTION logfunc2 (text) RETURNS timestamp AS '
If the expression's result data type doesn't match the variable's
data type, or the variable has a size/precision that is known
(as for <type>char(20)</type>), the result value will be implicitly casted by
the PL/pgSQL bytecode interpreter using the result types output- and
(as for <type>char(20)</type>), the result value will be implicitly cast by
the <application>PL/pgSQL</application> bytecode interpreter using the result type's output function and
the variable type's input function. Note that this could potentially
result in runtime errors generated by the type's input functions.
</para>
@ -694,7 +694,7 @@ tax := subtotal * 0.06;
All functions defined in a <productname>Postgres</productname>
database return a value. Thus, the normal way to call a function
is to execute a SELECT query or doing an assignment (resulting
in a PL/pgSQL internal SELECT).
in a <application>PL/pgSQL</application> internal SELECT).
</para>
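<para>
     For instance (assuming a function <function>compute_tax</function>
     exists; the name is only illustrative), either of these calls it:
<programlisting>
SELECT compute_tax(19.99);          -- as an ordinary query
tax := compute_tax(subtotal);       -- as an assignment inside PL/pgSQL
</programlisting>
</para>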
<para>
@ -722,8 +722,8 @@ PERFORM create_mv(''cs_session_page_requests_mv'',''
<para>
Often times you will want to generate dynamic queries inside
your PL/pgSQL functions. Or you have functions that will
generate other functions. PL/pgSQL provides the EXECUTE
your <application>PL/pgSQL</application> functions. Or you have functions that will
generate other functions. <application>PL/pgSQL</application> provides the EXECUTE
statement for these occasions.
</para>
@ -738,13 +738,13 @@ EXECUTE <replaceable class="command">query-string</replaceable>
<para>
When working with dynamic queries you will have to face
escaping of single quotes in PL/pgSQL. Please refer to the
escaping of single quotes in <application>PL/pgSQL</>. Please refer to the
table available in the "Porting from Oracle PL/SQL" chapter
for a detailed explanation that will save you some effort.
</para>
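<para>
     As a small sketch (the variable <varname>tablename</varname> is assumed
     to have been declared as <type>text</type>; remember that inside the
     function's source text every single quote must be doubled):
<programlisting>
EXECUTE ''CREATE TABLE '' || tablename || '' (id integer)'';
</programlisting>
</para>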
<para>
Unlike all other queries in PL/pgSQL, a
Unlike all other queries in <application>PL/pgSQL</>, a
<replaceable>query</replaceable> run by an EXECUTE statement is
not prepared and saved just once during the life of the server.
Instead, the <replaceable>query</replaceable> is prepared each
@ -834,7 +834,7 @@ GET DIAGNOSTICS <replaceable>variable</replaceable> = <replaceable>item</replace
This command allows retrieval of system status indicators. Each
<replaceable>item</replaceable> is a keyword identifying a state
value to be assigned to the specified variable (which should be
of the right datatype to receive it). The currently available
of the right data type to receive it). The currently available
status items are <varname>ROW_COUNT</>, the number of rows
processed by the last <acronym>SQL</acronym> query sent down to
the <acronym>SQL</acronym> engine; and <varname>RESULT_OID</>,
@ -873,7 +873,7 @@ RETURN <replaceable>expression</replaceable>
<title>Control Structures</title>
<para>
Control structures are probably the most useful (and
important) part of PL/SQL. With PL/pgSQL's control structures,
important) part of <application>PL/pgSQL</>. With <application>PL/pgSQL</>'s control structures,
you can manipulate <productname>PostgreSQL</> data in a very
flexible and powerful way.
</para>
@ -1022,8 +1022,8 @@ END IF;
<para>
With the LOOP, WHILE, FOR and EXIT statements, you can
control the flow of execution of your PL/pgSQL program
iteratively.
control the flow of execution of your <application>PL/pgSQL</application> program
iteratively.
</para>
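<para>
     A trivial sketch (assuming the variables used have been declared as
     appropriate in the enclosing block):
<programlisting>
FOR i IN 1..10 LOOP
    total := total + i;
END LOOP;
</programlisting>
</para>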
<variablelist>
@ -1208,7 +1208,7 @@ SELECT INTO <replaceable>target</replaceable> <replaceable>expressions</replace
record-/row-fields. Note that this is quite different from
Postgres' normal interpretation of SELECT INTO, which is that the
INTO target is a newly created table. (If you want to create a
table from a SELECT result inside a PL/pgSQL function, use the
table from a SELECT result inside a <application>PL/pgSQL</application> function, use the
equivalent syntax <command>CREATE TABLE AS SELECT</command>.)
</para>
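<para>
     A minimal sketch of the <application>PL/pgSQL</application> form
     (assuming <varname>users_rec</varname> has been declared as a record
     variable and a table <structname>users</structname> exists):
<programlisting>
SELECT INTO users_rec * FROM users WHERE user_id = 4;
</programlisting>
</para>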
@ -1344,7 +1344,7 @@ END LOOP;
<para>
Use the RAISE statement to throw messages into the
<productname>Postgres</productname> elog mechanism.
<productname>Postgres</productname> <function>elog</function> mechanism.
<synopsis>
RAISE <replaceable class="parameter">level</replaceable> '<replaceable class="parameter">format</replaceable>' <optional>, <replaceable class="parameter">variable</replaceable> <optional>...</optional></optional>;
@ -1407,7 +1407,7 @@ RAISE EXCEPTION ''Inexistent ID --> %'',user_id;
</para>
<para>
Thus, the only thing PL/pgSQL currently does when it encounters
Thus, the only thing <application>PL/pgSQL</application> currently does when it encounters
an abort during execution of a function or trigger
procedure is to write some additional NOTICE level log messages
telling in which function and where (line number and type of
@ -1423,7 +1423,7 @@ RAISE EXCEPTION ''Inexistent ID --> %'',user_id;
<title>Trigger Procedures</title>
<para>
PL/pgSQL can be used to define trigger procedures. They are created
<application>PL/pgSQL</application> can be used to define trigger procedures. They are created
with the usual <command>CREATE FUNCTION</command> command as a function with no
arguments and a return type of <type>OPAQUE</type>.
</para>
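<para>
     The bare skeleton of such a procedure is therefore (the name is only
     illustrative; the real logic goes where the comment is):
<programlisting>
CREATE FUNCTION my_trigger_proc () RETURNS opaque AS '
    BEGIN
        -- trigger logic goes here, see the example below
        RETURN NEW;
    END;
' LANGUAGE 'plpgsql';
</programlisting>
</para>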
@ -1559,7 +1559,7 @@ RAISE EXCEPTION ''Inexistent ID --> %'',user_id;
</para>
<example>
<title>A PL/pgSQL Trigger Procedure Example</title>
<title>A <application>PL/pgSQL</application> Trigger Procedure Example</title>
<para>
This trigger ensures that any time a row is inserted or updated
@ -1611,13 +1611,13 @@ CREATE TRIGGER emp_stamp BEFORE INSERT OR UPDATE ON emp
<para>
Here are only a few functions to demonstrate how easy it is to
write PL/pgSQL
write <application>PL/pgSQL</>
functions. For more complex examples the programmer
might look at the regression test for PL/pgSQL.
might look at the regression test for <application>PL/pgSQL</>.
</para>
<para>
One painful detail in writing functions in PL/pgSQL is the handling
One painful detail in writing functions in <application>PL/pgSQL</application> is the handling
of single quotes. The function's source text on <command>CREATE FUNCTION</command> must
be a literal string. Single quotes inside of literal strings must be
either doubled or quoted with a backslash. We are still looking for
@ -1633,10 +1633,10 @@ CREATE TRIGGER emp_stamp BEFORE INSERT OR UPDATE ON emp
</para>
<example>
<title>A Simple PL/pgSQL Function to Increment an Integer</title>
<title>A Simple <application>PL/pgSQL</application> Function to Increment an Integer</title>
<para>
The following two PL/pgSQL functions are identical to their
The following two <application>PL/pgSQL</application> functions are identical to their
counterparts from the C language function discussion. This
function receives an <type>integer</type> and increments it by
one, returning the incremented value.
@ -1652,7 +1652,7 @@ CREATE FUNCTION add_one (integer) RETURNS integer AS '
</example>
<example>
<title>A Simple PL/pgSQL Function to Concatenate Text</title>
<title>A Simple <application>PL/pgSQL</application> Function to Concatenate Text</title>
<para>
This function receives two <type>text</type> parameters and
@ -1669,16 +1669,16 @@ CREATE FUNCTION concat_text (text, text) RETURNS text AS '
</example>
<example>
<title>A PL/pgSQL Function on Composite Type</title>
<title>A <application>PL/pgSQL</application> Function on Composite Type</title>
<para>
In this example, we take EMP (a table) and an
In this example, we take <literal>EMP</> (a table) and an
<type>integer</type> as arguments to our function, which returns
a <type>boolean</type>. If the "salary" field of the EMP table is
<literal>NULL</literal>, we return "f". Otherwise we compare with
a <type>boolean</type>. If the <structfield>salary</> field of the <structname>EMP</> table is
<literal>NULL</literal>, we return <literal>f</>. Otherwise we compare
that field with the <type>integer</type> passed to the function
and return the <type>boolean</type> result of the comparison (t
or f). This is the PL/pgSQL equivalent to the example from the C
or f). This is the <application>PL/pgSQL</application> equivalent to the example from the C
functions.
</para>
@ -1743,7 +1743,7 @@ CREATE FUNCTION c_overpaid (EMP, integer) RETURNS boolean AS '
<para>
This section explains differences between Oracle's PL/SQL and
PostgreSQL's PL/pgSQL languages in the hopes of helping developers
PostgreSQL's <application>PL/pgSQL</application> languages in the hopes of helping developers
port applications from Oracle to PostgreSQL. Most of the code here
is from the <ulink url="http://www.arsdigita.com">ArsDigita</ulink>
<ulink url="http://www.arsdigita.com/asj/clickstream">Clickstream
@ -1753,10 +1753,10 @@ CREATE FUNCTION c_overpaid (EMP, integer) RETURNS boolean AS '
</para>
<para>
PL/pgSQL is similar to PL/SQL in many aspects. It is a block
<application>PL/pgSQL</application> is similar to PL/SQL in many aspects. It is a block
structured, imperative language (all variables have to be
declared). PL/SQL has many more features than its PostgreSQL
counterpart, but PL/pgSQL allows for a great deal of functionality
counterpart, but <application>PL/pgSQL</application> allows for a great deal of functionality
and it is being improved constantly.
</para>
@ -1937,7 +1937,7 @@ SHOW ERRORS;
</para>
<para>
Let's go through this function and see the differences to PL/pgSQL:
Let's go through this function and see the differences to <application>PL/pgSQL</>:
<itemizedlist>
<listitem>
@ -2100,7 +2100,7 @@ end;
<para>
The following Oracle PL/SQL procedure is used to parse a URL and
return several elements (host, path and query). It is a
procedure because in PL/pgSQL functions only one value can be returned
procedure because in <application>PL/pgSQL</application> functions only one value can be returned
(see <xref linkend="plpgsql-porting-procedures">). In
PostgreSQL, one way to work around this is to split the procedure
in three different functions: one to return the host, another for
@ -2203,7 +2203,7 @@ end;
<para>
Oracle procedures give a little more flexibility to the developer
because nothing needs to be explicitly returned, but it can be
through the use of INOUT or OUT parameters.
through the use of <literal>INOUT</> or <literal>OUT</> parameters.
</para>
<para>
@ -2253,14 +2253,14 @@ show errors
<callout arearefs="co.plpgsql-porting-locktable">
<para>
If you do a <literal>LOCK TABLE</literal> in PL/pgSQL, the lock
If you do a <literal>LOCK TABLE</literal> in <application>PL/pgSQL</>, the lock
will not be released until the calling transaction is finished.
</para>
</callout>
<callout arearefs="co.plpgsql-porting-commit">
<para>
You also cannot have transactions in PL/pgSQL procedures. The
You also cannot have transactions in <application>PL/pgSQL</application> procedures. The
entire function (and other functions called from it) is
executed in a transaction and PostgreSQL rolls back the results if
something goes wrong. Therefore only one
@ -2278,7 +2278,7 @@ show errors
</para>
<para>
So let's see one of the ways we could port this procedure to PL/pgSQL:
So let's see one of the ways we could port this procedure to <application>PL/pgSQL</>:
<programlisting>
drop function cs_create_job(integer);
@ -2316,7 +2316,7 @@ end;
<calloutlist>
<callout arearefs="co.plpgsql-porting-raise">
<para>
Notice how you can raise notices (or errors) in PL/pgSQL.
Notice how you can raise notices (or errors) in <application>PL/pgSQL</>.
</para>
</callout>
</calloutlist>
@ -2423,7 +2423,7 @@ END;
</sect3>
<sect3 id="plpgsql-porting-optimization">
<title>Optimizing PL/pgSQL Functions</title>
<title>Optimizing <application>PL/pgSQL</application> Functions</title>
<para>
PostgreSQL gives you two function creation modifiers to optimize

View File

@ -1,9 +1,9 @@
<!--
$Header: /cvsroot/pgsql/doc/src/sgml/pltcl.sgml,v 2.12 2001/08/02 15:45:55 momjian Exp $
$Header: /cvsroot/pgsql/doc/src/sgml/pltcl.sgml,v 2.13 2001/09/10 21:58:47 petere Exp $
-->
<chapter id="pltcl">
<title>PL/Tcl - TCL Procedural Language</title>
<title>PL/Tcl - Tcl Procedural Language</title>
<indexterm zone="pltcl">
<primary>PL/Tcl</primary>
@ -37,7 +37,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/pltcl.sgml,v 2.12 2001/08/02 15:45:55 momji
The good restriction is that everything is executed in a safe
Tcl interpreter. In addition to the limited command set of safe Tcl, only
a few commands are available to access the database via SPI and to raise
messages via elog(). There is no way to access internals of the
messages via <function>elog()</>. There is no way to access internals of the
database backend or to gain OS-level access under the permissions of the
<productname>Postgres</productname> user ID, as a C function can do.
Thus, any unprivileged database user may be
@ -157,7 +157,7 @@ CREATE FUNCTION overpaid_2 (EMP) RETURNS bool AS '
</para>
<para>
To help protect PL/Tcl procedures from unwanted side effects,
an array is made available to each procedure via the upvar
an array is made available to each procedure via the <function>upvar</>
command. The global name of this variable is the procedure's internal
name and the local name is GD. It is recommended that GD be used
for private status data of a procedure. Use regular Tcl global variables
@ -210,7 +210,7 @@ CREATE FUNCTION overpaid_2 (EMP) RETURNS bool AS '
<listitem>
<para>
A Tcl list of the table's field names prefixed with an empty list element.
So looking up an element name in the list with the lsearch Tcl command
So looking up an element name in the list with the <function>lsearch</> Tcl command
returns the same positive number starting from 1 as the fields are numbered
in the pg_attribute system catalog.
</para>
@ -344,7 +344,7 @@ CREATE TRIGGER trig_mytab_modcount BEFORE INSERT OR UPDATE ON mytab
<indexterm>
<primary>elog</primary>
</indexterm>
<term>elog <replaceable>level</replaceable> <replaceable>msg</replaceable></term>
<term><function>elog</> <replaceable>level</replaceable> <replaceable>msg</replaceable></term>
<listitem>
<para>
Fire a log message. Possible levels are NOTICE, ERROR,
@ -355,7 +355,7 @@ CREATE TRIGGER trig_mytab_modcount BEFORE INSERT OR UPDATE ON mytab
</varlistentry>
<varlistentry>
<term>quote <replaceable>string</replaceable></term>
<term><function>quote</> <replaceable>string</replaceable></term>
<listitem>
<para>
Duplicates all occurrences of single quote and backslash characters.
@ -398,7 +398,7 @@ CREATE TRIGGER trig_mytab_modcount BEFORE INSERT OR UPDATE ON mytab
<indexterm>
<primary>spi_lastoid</primary>
</indexterm>
<term>spi_lastoid</term>
<term><function>spi_lastoid</></term>
<listitem>
<para>
Returns the OID of the last query if it was an INSERT.
@ -407,7 +407,7 @@ CREATE TRIGGER trig_mytab_modcount BEFORE INSERT OR UPDATE ON mytab
</varlistentry>
<varlistentry>
<term>spi_exec ?-count <replaceable>n</replaceable>? ?-array <replaceable>name</replaceable>? <replaceable>query</replaceable> ?<replaceable>loop-body</replaceable>?</term>
<term><function>spi_exec</function> <literal>?-count <replaceable>n</replaceable>? ?-array <replaceable>name</replaceable>? <replaceable>query</replaceable> ?<replaceable>loop-body</replaceable>?</literal></term>
<listitem>
<para>
Call parser/planner/optimizer/executor for query.
@ -446,45 +446,45 @@ spi_exec -array C "SELECT * FROM pg_class" {
</varlistentry>
<varlistentry>
<term>spi_prepare <replaceable>query</replaceable> <replaceable>typelist</replaceable></term>
<term><function>spi_prepare</function> <replaceable>query</replaceable> <replaceable>typelist</replaceable></term>
<listitem>
<para>
Prepares AND SAVES a query plan for later execution. It is a bit different
from the C level SPI_prepare in that the plan is automatically copied to the
toplevel memory context. Thus, there is currently no way of preparing a
top-level memory context. Thus, there is currently no way of preparing a
plan without saving it.
</para>
<para>
If the query references arguments, the type names must be given as a Tcl
list. The return value from spi_prepare is a query ID to be used in
subsequent calls to spi_execp. See spi_execp for a sample.
list. The return value from <function>spi_prepare</function> is a query ID to be used in
subsequent calls to <function>spi_execp</function>. See <function>spi_execp</function> for a sample.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>spi_exec ?-count <replaceable>n</replaceable>? ?-array<replaceable>name</replaceable>? ?-nulls<replaceable>string</replaceable>? <replaceable>queryid</replaceable> ?<replaceable>value-list</replaceable>? ?<replaceable>loop-body</replaceable>?</term>
<term><function>spi_execp</> <literal>?-count <replaceable>n</replaceable>? ?-array <replaceable>name</replaceable>? ?-nulls <replaceable>string</replaceable>? <replaceable>queryid</replaceable> ?<replaceable>value-list</replaceable>? ?<replaceable>loop-body</replaceable>?</literal></term>
<listitem>
<para>
Execute a prepared plan from spi_prepare with variable substitution.
The optional -count value tells spi_execp the maximum number of rows
Execute a prepared plan from <function>spi_prepare</> with variable substitution.
The optional <literal>-count</literal> value tells <function>spi_execp</> the maximum number of rows
to be processed by the query.
</para>
<para>
The optional value for -nulls is a string of spaces and 'n' characters
telling spi_execp which of the values are NULL's. If given, it must
The optional value for <literal>-nulls</> is a string of spaces and 'n' characters
telling <function>spi_execp</function> which of the values are nulls. If given, it must
have exactly the length of the number of values.
</para>
<para>
The queryid is the ID returned by the spi_prepare call.
The <parameter>queryid</> is the ID returned by the <function>spi_prepare</function> call.
</para>
<para>
If there was a typelist given to spi_prepare, a Tcl list of values of
If there was a <parameter>typelist</> given to <function>spi_prepare</function>, a Tcl list of values of
exactly the same length must be given to spi_execp after the query. If
the type list on spi_prepare was empty, this argument must be omitted.
</para>
<para>
If the query is a SELECT statement, the same as described for spi_exec
If the query is a SELECT statement, the same as described for <function>spi_exec</>
happens for the loop-body and the variables for the fields selected.
</para>
<para>
@ -506,7 +506,7 @@ CREATE FUNCTION t1_count(int4, int4) RETURNS int4 AS '
Note that each backslash that Tcl should see must be doubled in
the query creating the function, since the main parser processes
backslashes too on CREATE FUNCTION.
Inside the query string given to spi_prepare should
Inside the query string given to <function>spi_prepare</> should
really be dollar signs to mark the parameter positions and to not let
$1 be substituted by the value given in the first function call.
</para>
@ -515,18 +515,20 @@ CREATE FUNCTION t1_count(int4, int4) RETURNS int4 AS '
<varlistentry>
<term>
Modules and the unknown command
Modules and the <function>unknown</> command
</term>
<listitem>
<para>
PL/Tcl has a special support for things often used. It recognizes two
magic tables, pltcl_modules and pltcl_modfuncs.
If these exist, the module 'unknown' is loaded into the interpreter
right after creation. Whenever an unknown Tcl procedure is called,
the unknown proc is asked to check if the procedure is defined in one
of the modules. If this is true, the module is loaded on demand.
To enable this behavior, the PL/Tcl call handler must be compiled
with -DPLTCL_UNKNOWN_SUPPORT set.
PL/Tcl has special support for things often used. It
recognizes two magic tables, <literal>pltcl_modules</> and
<literal>pltcl_modfuncs</>. If these exist, the module
'unknown' is loaded into the interpreter right after
creation. Whenever an unknown Tcl procedure is called, the
unknown proc is asked to check if the procedure is defined in
one of the modules. If this is true, the module is loaded on
demand. To enable this behavior, the
<productname>PostgreSQL</> installation must be configured with the option
<option>--enable-pltcl-unknown</option>.
</para>
<para>
There are support scripts to maintain these tables in the modules

View File

@ -1,4 +1,4 @@
<!-- $Header: /cvsroot/pgsql/doc/src/sgml/rules.sgml,v 1.14 2001/07/09 23:50:32 tgl Exp $ -->
<!-- $Header: /cvsroot/pgsql/doc/src/sgml/rules.sgml,v 1.15 2001/09/10 21:58:47 petere Exp $ -->
<Chapter Id="rules">
<Title>The <ProductName>Postgres</ProductName> Rule System</Title>
@ -43,7 +43,7 @@
[<XRef LinkEnd="STON90b" EndTerm="STON90b">].
</para>
<Sect1 id="querytree">
<Title>What is a Querytree?</Title>
<Title>What is a Query Tree?</Title>
<Para>
To understand how the rule system works it is necessary to know
@ -52,32 +52,32 @@
<Para>
The rule system is located between the query parser and the planner.
It takes the output of the parser, one querytree, and the rewrite
It takes the output of the parser, one query tree, and the rewrite
rules from the <FileName>pg_rewrite</FileName> catalog, which are
querytrees too with some extra information, and creates zero or many
querytrees as result. So its input and output are always things
query trees too with some extra information, and creates zero or many
query trees as result. So its input and output are always things
the parser itself could have produced and thus, anything it sees
is basically representable as an <Acronym>SQL</Acronym> statement.
</Para>
<Para>
Now what is a querytree? It is an internal representation of an
Now what is a query tree? It is an internal representation of an
<Acronym>SQL</Acronym> statement where the single parts that built
it are stored separately. These querytrees are visible when starting
the <ProductName>Postgres</ProductName> backend with debuglevel 4
it are stored separately. These query trees are visible when starting
the <ProductName>Postgres</ProductName> backend with debug level 4
and typing queries into the interactive backend interface. The rule
actions in the <FileName>pg_rewrite</FileName> system catalog are
also stored as querytrees. They are not formatted like the debug
also stored as query trees. They are not formatted like the debug
output, but they contain exactly the same information.
</Para>
<Para>
Reading a querytree requires some experience and it was a hard
Reading a query tree requires some experience and it was a hard
time when I started to work on the rule system. I can remember
that I was standing at the coffee machine and I saw the cup
in a targetlist, water and coffee powder in a rangetable and all
in a target list, water and coffee powder in a range table and all
the buttons in a qualification expression. Since
<Acronym>SQL</Acronym> representations of querytrees are
<Acronym>SQL</Acronym> representations of query trees are
sufficient to understand the rule system, this document will
not teach how to read them. It might help to learn
it and the naming conventions are required in the later following
@ -85,47 +85,45 @@
</Para>
<Sect2>
<Title>The Parts of a Querytree</Title>
<Title>The Parts of a Query Tree</Title>
<Para>
When reading the <Acronym>SQL</Acronym> representations of the
querytrees in this document it is necessary to be able to identify
the parts the statement is broken into when it is in the querytree
structure. The parts of a querytree are
</Para>
query trees in this document it is necessary to be able to identify
the parts the statement is broken into when it is in the query tree
structure. The parts of a query tree are
<Para>
<VariableList>
<VarListEntry>
<Term>
the commandtype
the command type
</Term>
<ListItem>
<Para>
This is a simple value telling which command
(SELECT, INSERT, UPDATE, DELETE) produced the parsetree.
(SELECT, INSERT, UPDATE, DELETE) produced the parse tree.
</Para>
</ListItem>
</VarListEntry>
<VarListEntry>
<Term>
the rangetable
the range table
</Term>
<ListItem>
<Para>
The rangetable is a list of relations that are used in the query.
The range table is a list of relations that are used in the query.
In a SELECT statement these are the relations given after
the FROM keyword.
</Para>
<Para>
Every rangetable entry identifies a table or view and tells
Every range table entry identifies a table or view and tells
by which name it is called in the other parts of the query.
In the querytree the rangetable entries are referenced by
In the query tree the range table entries are referenced by
index rather than by name, so here it doesn't matter if there
are duplicate names as it would in an <Acronym>SQL</Acronym>
statement. This can happen after the rangetables of rules
statement. This can happen after the range tables of rules
have been merged in. The examples in this document will not have
this situation.
</Para>
@ -134,11 +132,11 @@
<VarListEntry>
<Term>
the resultrelation
the result relation
</Term>
<ListItem>
<Para>
This is an index into the rangetable that identifies the
This is an index into the range table that identifies the
relation where the results of the query go.
</Para>
@ -151,7 +149,7 @@
</Para>
<Para>
On INSERT, UPDATE and DELETE queries the resultrelation
On INSERT, UPDATE and DELETE queries the result relation
is the table (or view!) where the changes take effect.
</Para>
</ListItem>
@ -159,11 +157,11 @@
<VarListEntry>
<Term>
the targetlist
the target list
</Term>
<ListItem>
<Para>
The targetlist is a list of expressions that define the result
The target list is a list of expressions that define the result
of the query. In the case of a SELECT, the expressions are what
builds the final output of the query. They are the expressions
between the SELECT and the FROM keywords. (* is just an
@ -173,23 +171,23 @@
</Para>
<Para>
DELETE queries don't need a targetlist because they don't
DELETE queries don't need a target list because they don't
produce any result. In fact the planner will add a special CTID
entry to the empty targetlist. But this is after the rule
entry to the empty target list. But this is after the rule
system and will be discussed later. For the rule system the
targetlist is empty.
target list is empty.
</Para>
<Para>
In INSERT queries the targetlist describes the new rows that
should go into the resultrelation. It is the expressions in the VALUES
In INSERT queries the target list describes the new rows that
should go into the result relation. It is the expressions in the VALUES
clause or the ones from the SELECT clause in INSERT ... SELECT.
Missing columns of the resultrelation will be filled in by the
Missing columns of the result relation will be filled in by the
planner with a constant NULL expression.
</Para>
<Para>
In UPDATE queries, the targetlist describes the new rows that should
In UPDATE queries, the target list describes the new rows that should
replace the old ones. In the rule system, it contains just the
expressions from the SET attribute = expression part of the query.
The planner will add missing columns by inserting expressions that
@ -198,9 +196,9 @@
</Para>
<Para>
Every entry in the targetlist contains an expression that can
Every entry in the target list contains an expression that can
be a constant value, a variable pointing to an attribute of one
of the relations in the rangetable, a parameter, or an expression
of the relations in the range table, a parameter, or an expression
tree made of function calls, constants, variables, operators etc.
</Para>
</ListItem>
@ -213,8 +211,8 @@
<ListItem>
<Para>
The query's qualification is an expression much like one of those
contained in the targetlist entries. The result value of this
expression is a boolean that tells if the operation
contained in the target list entries. The result value of this
expression is a Boolean that tells if the operation
(INSERT, UPDATE, DELETE or SELECT) for the final result row should be
executed or not. It is the WHERE clause of an
<Acronym>SQL</Acronym> statement.
@ -232,7 +230,7 @@
For a simple query like SELECT FROM a, b, c the join tree is just
a list of the FROM items, because we are allowed to join them in
any order. But when JOIN expressions --- particularly outer joins
--- are used, we have to join in the order shown by the JOINs.
--- are used, we have to join in the order shown by the joins.
The join tree shows the structure of the JOIN expressions. The
restrictions associated with particular JOIN clauses (from ON or
USING expressions) are stored as qualification expressions attached
@ -250,7 +248,7 @@
</Term>
<ListItem>
<Para>
The other parts of the querytree like the ORDER BY
The other parts of the query tree like the ORDER BY
clause aren't of interest here. The rule system
substitutes entries there while applying rules, but that
doesn't have much to do with the fundamentals of the rule
@ -304,7 +302,7 @@
Rules ON SELECT are applied to all queries as the
last step, even if the command
given is an INSERT, UPDATE or DELETE. And they have different
semantics from the others in that they modify the parsetree in
semantics from the others in that they modify the parse tree in
place instead of creating a new one.
So SELECT rules are described first.
</Para>
@ -329,9 +327,9 @@
</Para>
<Para>
The database needed to play with the examples is named al_bundy.
The database needed to play with the examples is named <literal>al_bundy</literal>.
You'll see soon why this is the database name. And it needs the
procedural language PL/pgSQL installed, because
procedural language <application>PL/pgSQL</> installed, because
we need a little min() function returning the lower of 2
integer values. We create that as
@ -424,7 +422,7 @@
will create a relation shoelace and an entry
in <FileName>pg_rewrite</FileName>
that tells that there is a rewrite rule that must be applied
whenever the relation shoelace is referenced in a query's rangetable.
whenever the relation shoelace is referenced in a query's range table.
The rule has no rule qualification (discussed later, with the
non SELECT rules, since SELECT rules currently cannot have them) and
it is INSTEAD. Note that rule qualifications are not the same as
@ -432,7 +430,7 @@
</Para>
<Para>
The rule's action is one querytree that is a copy of the
The rule's action is one query tree that is a copy of the
SELECT statement in the view creation command.
<Note>
@ -440,7 +438,7 @@
<Para>
The two extra range
table entries for NEW and OLD (named *NEW* and *CURRENT* for
historical reasons in the printed querytree) you can see in
historical reasons in the printed query tree) you can see in
the <Filename>pg_rewrite</Filename> entry aren't of interest
for SELECT rules.
</Para>
@ -508,10 +506,10 @@
</ProgramListing>
and this is given to the rule system. The rule system walks through the
rangetable and checks if there are rules in <Filename>pg_rewrite</Filename>
for any relation. When processing the rangetable entry for
range table and checks if there are rules in <Filename>pg_rewrite</Filename>
for any relation. When processing the range table entry for
<Filename>shoelace</Filename> (the only one up to now) it finds the
rule '_RETshoelace' with the parsetree
rule <literal>_RETshoelace</literal> with the parse tree
<ProgramListing>
<FirstTerm>SELECT s.sl_name, s.sl_avail,
@ -530,8 +528,8 @@
<Para>
To expand the view, the rewriter simply creates a subselect rangetable
entry containing the rule's action parsetree, and substitutes this
rangetable entry for the original one that referenced the view. The
resulting rewritten parsetree is almost the same as if Al had typed
range table entry for the original one that referenced the view. The
resulting rewritten parse tree is almost the same as if Al had typed
<ProgramListing>
SELECT shoelace.sl_name, shoelace.sl_avail,
@ -547,10 +545,10 @@
WHERE s.sl_unit = u.un_name) shoelace;
</ProgramListing>
There is one difference however: the sub-query's rangetable has two
There is one difference however: the sub-query's range table has two
extra entries shoelace *OLD*, shoelace *NEW*. These entries don't
participate directly in the query, since they aren't referenced by
the sub-query's join tree or targetlist. The rewriter uses them
the sub-query's join tree or target list. The rewriter uses them
to store the access permission check info that was originally present
in the rangetable entry that referenced the view. In this way, the
executor will still check that the user has proper permissions to access
@ -598,7 +596,7 @@
</Para>
<Para>
The output of the parser this time is the parsetree
The output of the parser this time is the parse tree
<ProgramListing>
SELECT shoe_ready.shoename, shoe_ready.sh_avail,
@ -610,7 +608,7 @@
The first rule applied will be the one for the
<Filename>shoe_ready</Filename> view and it results in the
parsetree
parse tree
<ProgramListing>
SELECT shoe_ready.shoename, shoe_ready.sh_avail,
@ -629,8 +627,8 @@
</ProgramListing>
Similarly, the rules for <Filename>shoe</Filename> and
<Filename>shoelace</Filename> are substituted into the rangetable of
the sub-query, leading to a three-level final querytree:
<Filename>shoelace</Filename> are substituted into the range table of
the sub-query, leading to a three-level final query tree:
<ProgramListing>
SELECT shoe_ready.shoename, shoe_ready.sh_avail,
@ -666,7 +664,7 @@
</ProgramListing>
It turns out that the planner will collapse this tree into a two-level
querytree: the bottommost selects will be "pulled up" into the middle
query tree: the bottommost selects will be "pulled up" into the middle
select since there's no need to process them separately. But the
middle select will remain separate from the top, because it contains
aggregate functions. If we pulled those up it would change the behavior
@ -698,18 +696,18 @@
<Title>View Rules in Non-SELECT Statements</Title>
<Para>
Two details of the parsetree aren't touched in the description of
view rules above. These are the commandtype and the resultrelation.
Two details of the parse tree aren't touched in the description of
view rules above. These are the command type and the result relation.
In fact, view rules don't need this information.
</Para>
<Para>
There are only a few differences between a parsetree for a SELECT
and one for any other command. Obviously they have another commandtype
and this time the resultrelation points to the rangetable entry where
There are only a few differences between a parse tree for a SELECT
and one for any other command. Obviously they have another command type
and this time the result relation points to the range table entry where
the result should go. Everything else is absolutely the same.
So having two tables t1 and t2 with attributes
a and b, the parsetrees for the two statements
a and b, the parse trees for the two statements
<ProgramListing>
SELECT t2.b FROM t1, t2 WHERE t1.a = t2.a;
@ -722,14 +720,14 @@
<ItemizedList>
<ListItem>
<Para>
The rangetables contain entries for the tables t1 and t2.
The range tables contain entries for the tables t1 and t2.
</Para>
</ListItem>
<ListItem>
<Para>
The targetlists contain one variable that points to attribute
b of the rangetable entry for table t2.
The target lists contain one variable that points to attribute
b of the range table entry for table t2.
</Para>
</ListItem>
@ -742,15 +740,15 @@
<ListItem>
<Para>
The jointrees show a simple join between t1 and t2.
The join trees show a simple join between t1 and t2.
</Para>
</ListItem>
</ItemizedList>
The consequence is, that both parsetrees result in similar execution
The consequence is that both parse trees result in similar execution
plans. They are both joins over the two tables. For the UPDATE
the missing columns from t1 are added to the targetlist by the planner
and the final parsetree will read as
the missing columns from t1 are added to the target list by the planner
and the final parse tree will read as
<ProgramListing>
UPDATE t1 SET a = t1.a, b = t2.b WHERE t1.a = t2.a;
@ -768,18 +766,18 @@
for. It just produces a result set of rows. The difference that one
is a SELECT command and the other is an UPDATE is handled in the
caller of the executor. The caller still knows (looking at the
parsetree) that this is an UPDATE, and he knows that this result
parse tree) that this is an UPDATE, and it knows that this result
should go into table t1. But which of the rows that are there
has to be replaced by the new row?
</Para>
<Para>
To resolve this problem, another entry is added to the targetlist
To resolve this problem, another entry is added to the target list
in UPDATE (and also in DELETE) statements: the current tuple ID (ctid).
This is a system attribute containing the file
block number and position in the block for the row. Knowing the table,
the ctid can be used to retrieve the original t1 row to be updated.
After adding the ctid to the targetlist, the query actually looks like
After adding the ctid to the target list, the query actually looks like
<ProgramListing>
SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a;
@ -791,7 +789,7 @@
into the table (after stripping ctid) and in the tuple header of the row
that ctid pointed to the cmax and xmax entries are set to the current
command counter and current transaction ID. Thus the old row is hidden
and after the transaction commited the vacuum cleaner can really move
and after the transaction committed the vacuum cleaner can really move
it out.
</Para>
@ -806,8 +804,8 @@
<Para>
The above demonstrates how the rule system incorporates
view definitions into the original parsetree. In the second example
a simple SELECT from one view created a final parsetree that is
view definitions into the original parse tree. In the second example
a simple SELECT from one view created a final parse tree that is
a join of 4 tables (unit is used twice with different names).
</Para>
@ -821,7 +819,7 @@
relationships between these tables plus the restrictive
qualifications from the views plus the qualifications from
the original query
in one single parsetree. And this is still the situation
in one single parse tree. And this is still the situation
when the original query is already a join over views.
Now the planner has to decide which is
the best path to execute the query. The more information
@ -839,13 +837,13 @@
<Para>
What happens if a view is named as the target relation for an INSERT,
UPDATE, or DELETE? After doing the substitutions described above,
we will have a querytree in which the resultrelation points at a
subquery rangetable entry. This will not work, so the rewriter throws
we will have a query tree in which the result relation points at a
subquery range table entry. This will not work, so the rewriter throws
an error if it sees it has produced such a thing.
</Para>
<Para>
To change this we can define rules that modify the behaviour
To change this we can define rules that modify the behavior
of non-SELECT queries. This is the topic of the next section.
</Para>
</Sect2>
@ -896,8 +894,8 @@
</ListItem>
</ItemizedList>
Second, they don't modify the parsetree in place. Instead they
create zero or many new parsetrees and can throw away the
Second, they don't modify the parse tree in place. Instead they
create zero or many new parse trees and can throw away the
original one.
</Para>
</sect2>
@ -920,10 +918,10 @@
<Para>
Update rules get applied by the rule system when the result
relation and the commandtype of a parsetree are equal to the
relation and the command type of a parse tree are equal to the
object and event given in the CREATE RULE command.
For update rules, the rule system creates a list of parsetrees.
Initially the parsetree list is empty.
For update rules, the rule system creates a list of parse trees.
Initially the parse tree list is empty.
There can be zero (NOTHING keyword), one or multiple actions.
To simplify, we look at a rule with one action. This rule
can have a qualification or not and it can be INSTEAD or not.
@ -938,7 +936,7 @@
</Para>
<Para>
So we have four cases that produce the following parsetrees for
So we have four cases that produce the following parse trees for
a one-action rule.
</Para>
<Para>
@ -949,8 +947,8 @@
<ItemizedList>
<ListItem>
<Para>
The parsetree from the rule action where the
original parsetree's qualification has been added.
The parse tree from the rule action where the
original parse tree's qualification has been added.
</Para>
</ListItem>
</ItemizedList>
@ -963,8 +961,8 @@
<ItemizedList>
<ListItem>
<Para>
The parsetree from the rule action where the
original parsetree's qualification has been added.
The parse tree from the rule action where the
original parse tree's qualification has been added.
</Para>
</ListItem>
</ItemizedList>
@ -977,8 +975,8 @@
<ItemizedList>
<ListItem>
<Para>
The parsetree from the rule action where the rule
qualification and the original parsetree's
The parse tree from the rule action where the rule
qualification and the original parse tree's
qualification have been added.
</Para>
</ListItem>
@ -992,15 +990,15 @@
<ItemizedList>
<ListItem>
<Para>
The parsetree from the rule action where the rule
qualification and the original parsetree's
The parse tree from the rule action where the rule
qualification and the original parse tree's
qualification have been added.
</Para>
</ListItem>
<ListItem>
<Para>
The original parsetree where the negated rule
The original parse tree where the negated rule
qualification has been added.
</Para>
</ListItem>
@ -1010,9 +1008,9 @@
</ItemizedList>
</para>
<Para>
Finally, if the rule is not INSTEAD, the unchanged original parsetree is
Finally, if the rule is not INSTEAD, the unchanged original parse tree is
added to the list. Since only qualified INSTEAD rules already add the
original parsetree, we end up with either one or two output parsetrees
original parse tree, we end up with either one or two output parse trees
for a rule with one action.
</Para>
@ -1027,11 +1025,11 @@
</Para>
<Para>
The parsetrees generated from rule actions are thrown into the
The parse trees generated from rule actions are thrown into the
rewrite system again and maybe more rules get applied resulting
in more or less parsetrees.
So the parsetrees in the rule actions must have either another commandtype
or another resultrelation. Otherwise this recursive process will end up in a loop.
in more or less parse trees.
So the parse trees in the rule actions must have either another command type
or another result relation. Otherwise this recursive process will end up in a loop.
There is a compiled-in recursion limit of currently 10 iterations.
If after 10 iterations there are still update rules to apply the
rule system assumes a loop over multiple rule definitions and reports
@ -1138,9 +1136,9 @@
<Para>
The rule is a qualified non-INSTEAD rule, so the rule system
has to return two parsetrees: the modified rule action and the original
parsetree. In the first step the rangetable of the original query is
incorporated into the rule's action parsetree. This results in
has to return two parse trees: the modified rule action and the original
parse tree. In the first step the range table of the original query is
incorporated into the rule's action parse tree. This results in
<ProgramListing>
INSERT INTO shoelace_log VALUES(
@ -1167,9 +1165,9 @@
difficulty with it. They need to support this same functionality
anyway for INSERT ... SELECT.
In step 3 the original parsetree's qualification is added,
restricting the resultset further to only the rows touched
by the original parsetree.
In step 3 the original parse tree's qualification is added,
restricting the result set further to only the rows touched
by the original parse tree.
<ProgramListing>
INSERT INTO shoelace_log VALUES(
@ -1181,8 +1179,8 @@
<FirstTerm>AND bpchareq(shoelace_data.sl_name, 'sl7')</FirstTerm>;
</ProgramListing>
Step 4 substitutes NEW references by the targetlist entries from the
original parsetree or with the matching variable references
Step 4 substitutes NEW references by the target list entries from the
original parse tree or with the matching variable references
from the result relation.
<ProgramListing>
@ -1195,7 +1193,7 @@
AND bpchareq(shoelace_data.sl_name, 'sl7');
</ProgramListing>
Step 5 changes OLD references into resultrelation references.
Step 5 changes OLD references into result relation references.
<ProgramListing>
INSERT INTO shoelace_log VALUES(
@ -1208,8 +1206,8 @@
</ProgramListing>
That's it. Since the rule is not INSTEAD, we also output the
original parsetree. In short, the output from the rule system
is a list of two parsetrees that are the same as the statements:
original parse tree. In short, the output from the rule system
is a list of two parse trees that are the same as the statements:
<ProgramListing>
INSERT INTO shoelace_log VALUES(
@ -1233,7 +1231,7 @@
</ProgramListing>
no log entry would get written. This
time the original parsetree does not contain a targetlist
time the original parse tree does not contain a target list
entry for sl_avail, so NEW.sl_avail will get replaced by
shoelace_data.sl_avail resulting in the extra query
@ -1257,8 +1255,8 @@
four rows in fact get updated (sl1, sl2, sl3 and sl4).
But sl3 already has sl_avail = 0. This time, the original
parsetrees qualification is different and that results
in the extra parsetree
parse tree's qualification is different and that results
in the extra parse tree
<ProgramListing>
INSERT INTO shoelace_log SELECT
@ -1269,12 +1267,12 @@
AND <FirstTerm>shoelace_data.sl_color = 'black'</FirstTerm>;
</ProgramListing>
This parsetree will surely insert three new log entries. And
This parse tree will surely insert three new log entries. And
that's absolutely correct.
</Para>
<Para>
Here we can see why it is important that the original parsetree is
Here we can see why it is important that the original parse tree is
executed last.
If the UPDATE had been executed first, all the rows
would already be set to zero, so the logging INSERT
@ -1290,7 +1288,7 @@
<Para>
A simple way to protect view relations from the mentioned
possibility that someone can try to INSERT, UPDATE and DELETE
on them is to let those parsetrees get
on them is to let those parse trees get
thrown away. We create the rules
<ProgramListing>
@ -1306,7 +1304,7 @@
relation <Filename>shoe</Filename>, the rule system will
apply the rules. Since the rules have
no actions and are INSTEAD, the resulting list of
parsetrees will be empty and the whole query will become
parse trees will be empty and the whole query will become
nothing because there is nothing left to be optimized or
executed after the rule system is done with it.
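<Para>
As a minimal sketch (assuming the <Filename>shoe</Filename> view from
the examples above; the rule names are arbitrary), such do-nothing
rules would look roughly like
<ProgramListing>
CREATE RULE shoe_ins_protect AS ON INSERT TO shoe
    DO INSTEAD NOTHING;
CREATE RULE shoe_upd_protect AS ON UPDATE TO shoe
    DO INSTEAD NOTHING;
CREATE RULE shoe_del_protect AS ON DELETE TO shoe
    DO INSTEAD NOTHING;
</ProgramListing>
</Para>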
@ -1316,8 +1314,8 @@
This way might irritate frontend applications because
absolutely nothing happened on the database and thus, the
backend will not return anything for the query. Not
even a PGRES_EMPTY_QUERY will be available in libpq.
In psql, nothing happens. This might change in the future.
even a <symbol>PGRES_EMPTY_QUERY</symbol> will be available in <application>libpq</>.
In <application>psql</application>, nothing happens. This might change in the future.
</Para>
</Note>
@ -1325,7 +1323,7 @@
<Para>
A more sophisticated way to use the rule system is to
create rules that rewrite the parsetree into one that
create rules that rewrite the parse tree into one that
does the right operation on the real tables. To do that
on the <Filename>shoelace</Filename> view, we create
the following rules:
@ -1357,10 +1355,10 @@
</ProgramListing>
Now there is a pack of shoelaces arriving in Al's shop and it has
a big partlist. Al is not that good in calculating and so
a big part list. Al is not that good at calculating, and so
we don't want him to manually update the shoelace view.
Instead we set up two little tables, one where he can
insert the items from the partlist and one with a special
insert the items from the part list and one with a special
trick. The create commands for these are:
<ProgramListing>
@ -1445,7 +1443,7 @@
It's a long way from the one INSERT ... SELECT to these
results. And its description will be the last in this
document (but not the last example :-). First there was the parsers output
document (but not the last example :-). First there was the parser's output
<ProgramListing>
INSERT INTO shoelace_ok SELECT
@ -1467,7 +1465,7 @@
and throws away the original INSERT on <Filename>shoelace_ok</Filename>.
This rewritten query is passed to the rule system again and
the second applied rule 'shoelace_upd' produced
the second applied rule <literal>shoelace_upd</literal> produced
<ProgramListing>
UPDATE shoelace_data SET
@ -1484,10 +1482,10 @@
AND bpchareq(shoelace_data.sl_name, shoelace.sl_name);
</ProgramListing>
Again it's an INSTEAD rule and the previous parsetree is trashed.
Again it's an INSTEAD rule and the previous parse tree is trashed.
Note that this query still uses the view <Filename>shoelace</Filename>.
But the rule system isn't finished with this loop so it continues
and applies the rule '_RETshoelace' on it and we get
and applies the rule <literal>_RETshoelace</literal> on it and we get
<ProgramListing>
UPDATE shoelace_data SET
@ -1508,8 +1506,8 @@
Again an update rule has been applied and so the wheel
turns on and we are in rewrite round 3. This time rule
'log_shoelace' gets applied what produces the extra
parsetree
<literal>log_shoelace</literal> gets applied, which produces the extra
parse tree
<ProgramListing>
INSERT INTO shoelace_log SELECT
@ -1532,8 +1530,8 @@
</ProgramListing>
After that the rule system runs out of rules and returns the
generated parsetrees.
So we end up with two final parsetrees that are equal to the
generated parse trees.
So we end up with two final parse trees that are equal to the
<Acronym>SQL</Acronym> statements
<ProgramListing>
@ -1566,7 +1564,7 @@
<Para>
There is a little detail that's a bit ugly. Looking at
the two queries, it turns out that the <Filename>shoelace_data</Filename>
relation appears twice in the rangetable where it could definitely
relation appears twice in the range table where it could definitely
be reduced to one. The planner does not handle it, and so the
execution plan for the rule system's output of the INSERT will be
@ -1582,7 +1580,7 @@ Nested Loop
-> Seq Scan on shoelace_data
</ProgramListing>
while omitting the extra rangetable entry would result in a
while omitting the extra range table entry would result in a
<ProgramListing>
Merge Join
@ -1659,7 +1657,7 @@ Merge Join
WHERE sl_name = shoelace.sl_name);
</ProgramListing>
Voila:
Voilà:
<ProgramListing>
al_bundy=> SELECT * FROM shoelace;
@ -1682,7 +1680,7 @@ Merge Join
itself has a subselect qualification containing a view
and where calculated view columns are used,
gets rewritten into
one single parsetree that deletes the requested data
one single parse tree that deletes the requested data
from a real table.
</Para>
@ -1718,7 +1716,7 @@ Merge Join
a relation (table or view) is automatically the owner of the
rewrite rules that are defined for it.
The <ProductName>Postgres</ProductName> rule system changes the
behaviour of the default access control system. Relations that
behavior of the default access control system. Relations that
are used due to rules get checked against the
permissions of the rule owner, not the user invoking the rule.
This means that a user only needs the required permissions
@ -1822,7 +1820,7 @@ Merge Join
For the things that can be implemented by both,
which is best depends on how the database is used.
A trigger is fired once for each affected row. A rule manipulates
the parsetree or generates an additional one. So if many
the parse tree or generates an additional one. So if many
rows are affected in one statement, a rule issuing one extra
query would usually do a better job than a trigger that is
called for every single row and must execute its operations
@ -1845,8 +1843,8 @@ Merge Join
</ProgramListing>
Both tables have many
thousands of rows and the index on hostname is unique.
The hostname column contains the full qualified domain
thousands of rows and the index on <structfield>hostname</> is unique.
The <structfield>hostname</> column contains the fully qualified domain
name of the computer. The rule/trigger should
delete rows from software that reference the deleted host.
Since the trigger is called for each individual row
@ -1856,7 +1854,7 @@ Merge Join
DELETE FROM software WHERE hostname = $1;
</ProgramListing>
in a prepared and saved plan and pass the hostname in
in a prepared and saved plan and pass the <structfield>hostname</> in
the parameter. The rule would be written as
<ProgramListing>
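-- a minimal sketch of such a rule (assumed form; names taken from the
-- tables described above):
CREATE RULE computer_del AS ON DELETE TO computer
    DO DELETE FROM software WHERE hostname = OLD.hostname;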
@ -1890,7 +1888,7 @@ Merge Join
So there would not be that much difference in speed between
the trigger and the rule implementation. With the next delete
we want to get rid of all the 2000 computers where the hostname starts
we want to get rid of all the 2000 computers where the <structfield>hostname</> starts
with 'old'. There are two possible queries to do that. One is
<ProgramListing>
@ -1922,7 +1920,7 @@ Merge Join
</ProgramListing>
This shows that the planner does not realize that the
qualification for the hostname on computer could also be
qualification for the <structfield>hostname</> on computer could also be
used for an index scan on software when there are
multiple qualification expressions combined with AND, as it
does in the regexp version of the query. The trigger will
@ -1931,7 +1929,7 @@ Merge Join
over computer and 2000 index scans for the software. The
rule implementation will do it with two queries over indexes.
And it depends on the overall size of the software table whether
the rule will still be faster in the seqscan situation. 2000
the rule will still be faster in the sequential scan situation. 2000
query executions over the SPI manager take some time, even
if all the index blocks to look them up will soon appear in
the cache.
@ -1946,8 +1944,8 @@ Merge Join
Again this could result in many rows being deleted from
computer. So the trigger will again fire many queries into
the executor. But the rule plan will again be the Nestloop over
two IndexScan's. Only using another index on computer:
the executor. But the rule plan will again be the nested loop over
two index scans. Only using another index on computer:
<ProgramListing>
Nestloop
@ -1977,7 +1975,7 @@ Merge Join
create a rule as in the shoelace_log example is to do it with
a rule qualification. That results in an extra query that is
always performed, even if the attribute of interest cannot
change at all because it does not appear in the targetlist
change at all because it does not appear in the target list
of the initial query. When this is enabled again, it will be
one more advantage of rules over triggers. Optimization of
a trigger must fail by definition in this case, because the
@ -1986,7 +1984,7 @@ Merge Join
a trigger only allows it to be specified at row level, so whenever a
row is touched, the trigger must be called to make its
decision. The rule system will know it by looking up the
targetlist and will suppress the additional query completely
target list and will suppress the additional query completely
if the attribute isn't touched. So the rule, qualified or not,
will only do its scans if there ever could be something to do.
</Para>
View File
@ -1,5 +1,5 @@
<!--
$Header: /cvsroot/pgsql/doc/src/sgml/xaggr.sgml,v 1.12 2001/05/12 22:51:36 petere Exp $
$Header: /cvsroot/pgsql/doc/src/sgml/xaggr.sgml,v 1.13 2001/09/10 21:58:47 petere Exp $
-->
<chapter id="xaggr">
@ -17,7 +17,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/xaggr.sgml,v 1.12 2001/05/12 22:51:36 peter
That is, an aggregate can be
defined in terms of state that is modified whenever an
input item is processed. To define a new aggregate
function, one selects a datatype for the state value,
function, one selects a data type for the state value,
an initial value for the state, and a state transition
function. The state transition function is just an
ordinary function that could also be used outside the
@ -28,8 +28,8 @@ $Header: /cvsroot/pgsql/doc/src/sgml/xaggr.sgml,v 1.12 2001/05/12 22:51:36 peter
</para>
<para>
Thus, in addition to the input and result datatypes seen by a user
of the aggregate, there is an internal state-value datatype that
Thus, in addition to the input and result data types seen by a user
of the aggregate, there is an internal state-value data type that
may be different from both the input and result types.
</para>
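<para>
For instance (a hypothetical sketch with invented function names, not
from the original text), an average over <type>int4</type> could keep
a two-element array as its state and only compute the final result in
a finalization function:
<programlisting>
CREATE AGGREGATE my_average (
    basetype  = int4,           -- input data type
    stype     = _int4,          -- internal state: array of two int4 (sum, count)
    sfunc     = my_avg_accum,   -- hypothetical state transition function
    finalfunc = my_avg_final,   -- hypothetical final function computing sum/count
    initcond  = '{0,0}'
);
</programlisting>
</para>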
@ -40,8 +40,8 @@ $Header: /cvsroot/pgsql/doc/src/sgml/xaggr.sgml,v 1.12 2001/05/12 22:51:36 peter
example of this kind of aggregate. "Sum" starts at
zero and always adds the current row's value to
its running total. For example, if we want to make a Sum
aggregate to work on a datatype for complex numbers,
we only need the addition function for that datatype.
aggregate to work on a data type for complex numbers,
we only need the addition function for that data type.
The aggregate definition is:
<programlisting>
@ -71,8 +71,8 @@ SELECT complex_sum(a) FROM test_complex;
state condition) if there are no non-null input values.
Perhaps we want to return NULL in that case instead --- SQL92
expects "Sum" to behave that way. We can do this simply by
omitting the "initcond" phrase, so that the initial state
condition is NULL. Ordinarily this would mean that the sfunc
omitting the <literal>initcond</literal> phrase, so that the initial state
condition is NULL. Ordinarily this would mean that the <literal>sfunc</literal>
would need to check for a NULL state-condition input, but for
"Sum" and some other simple aggregates like "Max" and "Min",
it's sufficient to insert the first non-null input value into
View File
@ -1,5 +1,5 @@
<!--
$Header: /cvsroot/pgsql/doc/src/sgml/xfunc.sgml,v 1.34 2001/09/06 10:28:39 petere Exp $
$Header: /cvsroot/pgsql/doc/src/sgml/xfunc.sgml,v 1.35 2001/09/10 21:58:47 petere Exp $
-->
<chapter id="xfunc">
@ -1427,7 +1427,7 @@ LANGUAGE 'c';
in the <literal>prosrc</literal> attribute of the
<classname>pg_proc</classname> table entry. This may be the source
text in the procedural language itself (like for PL/Tcl), a
pathname to a file, or anything else that tells the call handler
path name to a file, or anything else that tells the call handler
what to do in detail.
</para>
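<para>
For example (a sketch; the function shown is illustrative and not part
of the original text), for a <application>PL/Tcl</application> function
the string given in the <literal>AS</literal> clause is what ends up in
<literal>prosrc</literal>:
<programlisting>
CREATE FUNCTION tcl_max(int4, int4) RETURNS int4 AS '
    if {$1 > $2} {return $1}
    return $2
' LANGUAGE 'pltcl';
</programlisting>
</para>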
View File
@ -1,5 +1,5 @@
<!--
$Header: /cvsroot/pgsql/doc/src/sgml/xindex.sgml,v 1.17 2001/08/31 04:17:13 ishii Exp $
$Header: /cvsroot/pgsql/doc/src/sgml/xindex.sgml,v 1.18 2001/09/10 21:58:47 petere Exp $
Postgres documentation
-->
@ -70,7 +70,7 @@ Postgres documentation
</row>
<row>
<entry>amcanmulticol</entry>
<entry>does AM support multi-column indexes?</entry>
<entry>does AM support multicolumn indexes?</entry>
</row>
<row>
<entry>amindexnulls</entry>
@ -288,7 +288,7 @@ SELECT oid, *
<para>
The above example assumes that you want to make this new opclass the
default B-tree opclass for the <filename>complex</filename> datatype.
default B-tree opclass for the <filename>complex</filename> data type.
If you don't, just set <filename>opcdefault</filename> to false instead.
<filename>opckeytype</filename> is not described here; it should always
be zero for B-tree opclasses.
@ -354,24 +354,24 @@ CREATE FUNCTION complex_abs_eq(complex, complex)
are being defined. We can only have one operator named, say, = and
taking type <filename>complex</filename> for both operands. In this case
we don't have any other operator = for <filename>complex</filename>,
but if we were building a practical datatype we'd probably want = to
but if we were building a practical data type we'd probably want = to
be the ordinary equality operation for complex numbers. In that case,
we'd need to use some other operator name for complex_abs_eq.
</para>
<para>
Second, although Postgres can cope with operators having
the same name as long as they have different input datatypes, C can only
the same name as long as they have different input data types, C can only
cope with one global routine having a given name, period. So we shouldn't
name the C function something simple like <filename>abs_eq</filename>.
Usually it's a good practice to include the datatype name in the C
function name, so as not to conflict with functions for other datatypes.
Usually it's a good practice to include the data type name in the C
function name, so as not to conflict with functions for other data types.
</para>
<para>
Third, we could have made the Postgres name of the function
<filename>abs_eq</filename>, relying on Postgres to distinguish it
by input datatypes from any other Postgres function of the same name.
by input data types from any other Postgres function of the same name.
To keep the example simple, we make the function have the same names
at the C level and Postgres level.
</para>
View File
@ -1,5 +1,5 @@
<!--
$Header: /cvsroot/pgsql/doc/src/sgml/xoper.sgml,v 1.12 2001/05/07 00:43:14 tgl Exp $
$Header: /cvsroot/pgsql/doc/src/sgml/xoper.sgml,v 1.13 2001/09/10 21:58:47 petere Exp $
-->
<Chapter Id="xoper">
@ -113,7 +113,7 @@ SELECT (a + b) AS c FROM test_complex;
commutator of the operator being defined. We say that operator A is the
commutator of operator B if (x A y) equals (y B x) for all possible input
values x,y. Notice that B is also the commutator of A. For example,
operators '<' and '>' for a particular datatype are usually each others'
operators '<' and '>' for a particular data type are usually each other's
commutators, and operator '+' is usually commutative with itself.
But operator '-' is usually not commutative with anything.
</para>
@ -176,7 +176,7 @@ SELECT (a + b) AS c FROM test_complex;
is the negator of operator B if both return boolean results and
(x A y) equals NOT (x B y) for all possible inputs x,y.
Notice that B is also the negator of A.
For example, '<' and '>=' are a negator pair for most datatypes.
For example, '<' and '>=' are a negator pair for most data types.
An operator can never validly be its own negator.
</para>
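<para>
For instance (a sketch building on the complex-number type used in the
examples of this part of the guide; the procedure and operator names
are assumed, not from the original text), the absolute-value less-than
operator could declare both links like this:
<programlisting>
CREATE OPERATOR < (
   leftarg    = complex,
   rightarg   = complex,
   procedure  = complex_abs_lt,
   commutator = > ,
   negator    = >=
);
</programlisting>
</para>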
@ -260,11 +260,11 @@ SELECT (a + b) AS c FROM test_complex;
</para>
<para>
You can use scalarltsel and scalargtsel for comparisons on datatypes that
You can use scalarltsel and scalargtsel for comparisons on data types that
have some sensible means of being converted into numeric scalars for
range comparisons. If possible, add the datatype to those understood
by the routine convert_to_scalar() in src/backend/utils/adt/selfuncs.c.
(Eventually, this routine should be replaced by per-datatype functions
range comparisons. If possible, add the data type to those understood
by the routine convert_to_scalar() in <filename>src/backend/utils/adt/selfuncs.c</filename>.
(Eventually, this routine should be replaced by per-data-type functions
identified through a column of the pg_type table; but that hasn't happened
yet.) If you do not do this, things will still work, but the optimizer's
estimates won't be as good as they could be.
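<para>
A sketch of how this is specified (assumed form, reusing the
hypothetical less-than operator from the sketch above):
<programlisting>
CREATE OPERATOR < (
   leftarg   = complex,
   rightarg  = complex,
   procedure = complex_abs_lt,
   restrict  = scalarltsel
);
</programlisting>
</para>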
@ -272,7 +272,7 @@ SELECT (a + b) AS c FROM test_complex;
<para>
There are additional selectivity functions designed for geometric
operators in src/backend/utils/adt/geo_selfuncs.c: areasel, positionsel,
operators in <filename>src/backend/utils/adt/geo_selfuncs.c</filename>: areasel, positionsel,
and contsel. At this writing these are just stubs, but you may want
to use them (or even better, improve them) anyway.
</para>
@ -351,12 +351,12 @@ SELECT (a + b) AS c FROM test_complex;
<para>
There are also machine-dependent ways in which a hash join might fail
to do the right thing. For example, if your datatype
to do the right thing. For example, if your data type
is a structure in which there may be uninteresting pad bits, it's unsafe
to mark the equality operator HASHES. (Unless, perhaps, you write
your other operators to ensure that the unused bits are always zero.)
Another example is that the FLOAT datatypes are unsafe for hash
joins. On machines that meet the IEEE floating point standard, minus
Another example is that the FLOAT data types are unsafe for hash
joins. On machines that meet the <acronym>IEEE</> floating point standard, minus
zero and plus zero are different values (different bit patterns) but
they are defined to compare equal. So, if float equality were marked
HASHES, a minus zero and a plus zero would probably not be matched up
@ -365,7 +365,7 @@ SELECT (a + b) AS c FROM test_complex;
<para>
The bottom line is that you should probably only use HASHES for
equality operators that are (or could be) implemented by memcmp().
equality operators that are (or could be) implemented by <function>memcmp()</function>.
</para>
</sect2>
@ -393,16 +393,16 @@ SELECT (a + b) AS c FROM test_complex;
it is possible to mergejoin two
distinct data types so long as they are logically compatible. For
example, the int2-versus-int4 equality operator is mergejoinable.
We only need sorting operators that will bring both datatypes into a
We only need sorting operators that will bring both data types into a
logically compatible sequence.
</para>
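<para>
A sketch of a mergejoinable equality operator (assumed form; the
procedure and the '<' sort operators are taken from the complex-number
sketches above and must already exist):
<programlisting>
CREATE OPERATOR = (
   leftarg   = complex,
   rightarg  = complex,
   procedure = complex_abs_eq,
   sort1     = < ,
   sort2     = <
);
</programlisting>
</para>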
<para>
When specifying merge sort operators, the current operator and both
referenced operators must return boolean; the SORT1 operator must have
both input datatypes equal to the current operator's left argument type,
both input data types equal to the current operator's left argument type,
and the SORT2 operator must have
both input datatypes equal to the current operator's right argument type.
both input data types equal to the current operator's right argument type.
(As with COMMUTATOR and NEGATOR, this means that the operator name is
sufficient to specify the operator, and the system is able to make dummy
operator entries if you happen to define the equality operator before
@ -434,7 +434,7 @@ SELECT (a + b) AS c FROM test_complex;
<listitem>
<para>
There must be '<' and '>' ordering operators having the same left and
right input datatypes as the mergejoinable operator itself. These
right input data types as the mergejoinable operator itself. These
operators <emphasis>must</emphasis> be named '<' and '>'; you do
not have any choice in the matter, since there is no provision for
specifying them explicitly. Note that if the left and right data types
View File
@ -1,5 +1,5 @@
<!--
$Header: /cvsroot/pgsql/doc/src/sgml/xplang.sgml,v 1.14 2001/08/13 21:34:51 petere Exp $
$Header: /cvsroot/pgsql/doc/src/sgml/xplang.sgml,v 1.15 2001/09/10 21:58:47 petere Exp $
-->
<chapter id="xplang">
@ -44,7 +44,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/xplang.sgml,v 1.14 2001/08/13 21:34:51 pete
<para>
For the languages supplied with the standard distribution, the
shell script <filename>createlang</filename> may be used instead
of carrying out the details by hand. For example, to install PL/pgSQL
of carrying out the details by hand. For example, to install <application>PL/pgSQL</application>
into the template1 database, use
<programlisting>
createlang plpgsql template1
@ -102,8 +102,8 @@ CREATE <optional>TRUSTED</optional> <optional>PROCEDURAL</optional> LANGUAGE <re
executed inside the database backend, the <acronym>TRUSTED</acronym>
flag should only be given for
languages that do not allow access to the database backend's
internals or the filesystem. The languages PL/pgSQL,
PL/Tcl, and PL/Perl are known to be trusted; the language PL/TclU
internals or the file system. The languages <application>PL/pgSQL</application>,
<application>PL/Tcl</application>, and <application>PL/Perl</application> are known to be trusted; the language <application>PL/TclU</application>
should <emphasis>not</emphasis> be marked trusted.
</para>
</step>
@ -111,7 +111,7 @@ CREATE <optional>TRUSTED</optional> <optional>PROCEDURAL</optional> LANGUAGE <re
<para>
In a default <productname>Postgres</productname> installation, the
handler for the PL/pgSQL language is built and installed into the
handler for the <application>PL/pgSQL</application> language is built and installed into the
<quote>library</quote> directory. If Tcl/Tk support is configured
in, the handlers for PL/Tcl and PL/TclU are also built and installed in
the same location. Likewise, the PL/Perl handler is built and installed
@ -125,7 +125,7 @@ CREATE <optional>TRUSTED</optional> <optional>PROCEDURAL</optional> LANGUAGE <re
<step performance="required">
<para>
The following command tells the database where to find the
shared object for the PL/pgSQL language's call handler function.
shared object for the <application>PL/pgSQL</application> language's call handler function.
<programlisting>
CREATE FUNCTION plpgsql_call_handler () RETURNS OPAQUE AS
@ -143,7 +143,7 @@ CREATE TRUSTED PROCEDURAL LANGUAGE plpgsql
</programlisting>
then defines that the previously declared call handler function
should be invoked for functions and trigger procedures where the
language attribute is 'plpgsql'.
language attribute is <literal>plpgsql</literal>.
</para>
</step>
</procedure>