User-Defined Functions

PostgreSQL provides four kinds of functions:

query language functions (functions written in SQL)
procedural language functions (functions written in, for
example, PL/pgSQL or PL/Tcl)
internal functions
C-language functions
Every kind
of function can take base types, composite types, or
combinations of these as arguments (parameters). In addition,
every kind of function can return a base type or
a composite type. Functions may also be defined to return
sets of base or composite values.
Many kinds of functions can take or return certain pseudo-types
(such as polymorphic types), but the available facilities vary.
Consult the description of each kind of function for more details.
It's easiest to define SQL
functions, so we'll start by discussing those.
Most of the concepts presented for SQL functions
will carry over to the other types of functions.
Throughout this chapter, it can be useful to look at the reference
page of the CREATE FUNCTION command to
understand the examples better. Some examples from this chapter
can be found in funcs.sql and
funcs.c in the src/tutorial
directory in the PostgreSQL source
distribution.
Query Language (SQL) Functions
SQL functions execute an arbitrary list of SQL statements, returning
the result of the last query in the list.
In the simple (non-set)
case, the first row of the last query's result will be returned.
(Bear in mind that the first row of a multirow
result is not well-defined unless you use ORDER BY.)
If the last query happens
to return no rows at all, the null value will be returned.
Alternatively,
an SQL function may be declared to return a set, by specifying the
function's return type as SETOF
sometype. In this case all rows of the
last query's result are returned. Further details appear below.
The body of an SQL function must be a list of SQL
statements separated by semicolons. A semicolon after the last
statement is optional. Unless the function is declared to return
void, the last statement must be a SELECT.
Any collection of commands in the SQL
language can be packaged together and defined as a function.
Besides SELECT queries, the commands can include data
modification queries (INSERT,
UPDATE, and DELETE), as well as
other SQL commands. (The only exception is that you cannot put
BEGIN, COMMIT, ROLLBACK, or
SAVEPOINT commands into an SQL function.)
However, the final command
must be a SELECT that returns whatever is
specified as the function's return type. Alternatively, if you
want to define a SQL function that performs actions but has no
useful value to return, you can define it as returning void.
In that case, the function body must not end with a SELECT.
For example, this function removes rows with negative salaries from
the emp table:
CREATE FUNCTION clean_emp() RETURNS void AS '
DELETE FROM emp
WHERE salary < 0;
' LANGUAGE SQL;
SELECT clean_emp();
clean_emp
-----------
(1 row)
The syntax of the CREATE FUNCTION command requires
the function body to be written as a string constant. It is usually
most convenient to use dollar quoting for the string constant.
If you choose to use regular single-quoted string constant syntax,
you must escape single quote marks (') and backslashes
(\) used in the body of the function, typically by
doubling them.
Arguments to the SQL function are referenced in the function
body using the syntax $n: $1
refers to the first argument, $2 to the second, and so on.
If an argument is of a composite type, then the dot notation,
e.g., $1.name, may be used to access attributes
of the argument. The arguments can only be used as data values,
not as identifiers. Thus for example this is reasonable:
INSERT INTO mytable VALUES ($1);
but this will not work:
INSERT INTO $1 VALUES (42);
SQL Functions on Base Types
The simplest possible SQL function has no arguments and
simply returns a base type, such as integer:
CREATE FUNCTION one() RETURNS integer AS $$
SELECT 1 AS result;
$$ LANGUAGE SQL;
-- Alternative syntax for string literal:
CREATE FUNCTION one() RETURNS integer AS '
SELECT 1 AS result;
' LANGUAGE SQL;
SELECT one();
one
-----
1
Notice that we defined a column alias within the function body for the result of the function
(with the name result), but this column alias is not visible
outside the function. Hence, the result is labeled one
instead of result.
It is almost as easy to define SQL functions
that take base types as arguments. In the example below, notice
how we refer to the arguments within the function as $1
and $2.
CREATE FUNCTION add_em(integer, integer) RETURNS integer AS $$
SELECT $1 + $2;
$$ LANGUAGE SQL;
SELECT add_em(1, 2) AS answer;
answer
--------
3
Here is a more useful function, which might be used to debit a
bank account:
CREATE FUNCTION tf1 (integer, numeric) RETURNS integer AS $$
UPDATE bank
SET balance = balance - $2
WHERE accountno = $1;
SELECT 1;
$$ LANGUAGE SQL;
A user could execute this function to debit account 17 by $100.00 as
follows:
SELECT tf1(17, 100.0);
In practice one would probably like a more useful result from the
function than a constant 1, so a more likely definition
is
CREATE FUNCTION tf1 (integer, numeric) RETURNS numeric AS $$
UPDATE bank
SET balance = balance - $2
WHERE accountno = $1;
SELECT balance FROM bank WHERE accountno = $1;
$$ LANGUAGE SQL;
which adjusts the balance and returns the new balance.
SQL Functions on Composite Types
When writing functions with arguments of composite
types, we must not only specify which
argument we want (as we did above with $1 and $2) but
also the desired attribute (field) of that argument. For example,
suppose that
emp is a table containing employee data, and therefore
also the name of the composite type of each row of the table. Here
is a function double_salary that computes what someone's
salary would be if it were doubled:
CREATE TABLE emp (
name text,
salary numeric,
age integer,
cubicle point
);
CREATE FUNCTION double_salary(emp) RETURNS numeric AS $$
SELECT $1.salary * 2 AS salary;
$$ LANGUAGE SQL;
SELECT name, double_salary(emp.*) AS dream
FROM emp
WHERE emp.cubicle ~= point '(2,1)';
name | dream
------+-------
Bill | 8400
Notice the use of the syntax $1.salary
to select one field of the argument row value. Also notice
how the calling SELECT command uses *
to select
the entire current row of a table as a composite value. The table
row can alternatively be referenced using just the table name,
like this:
SELECT name, double_salary(emp) AS dream
FROM emp
WHERE emp.cubicle ~= point '(2,1)';
but this usage is deprecated since it's easy to get confused.
Sometimes it is handy to construct a composite argument value
on-the-fly. This can be done with the ROW construct.
For example, we could adjust the data being passed to the function:
SELECT name, double_salary(ROW(name, salary*1.1, age, cubicle)) AS dream
FROM emp;
It is also possible to build a function that returns a composite type.
This is an example of a function
that returns a single emp row:
CREATE FUNCTION new_emp() RETURNS emp AS $$
SELECT text 'None' AS name,
1000.0 AS salary,
25 AS age,
point '(2,2)' AS cubicle;
$$ LANGUAGE SQL;
In this example we have specified each of the attributes
with a constant value, but any computation
could have been substituted for these constants.
Note two important things about defining the function:

The select list order in the query must be exactly the same as
that in which the columns appear in the table associated
with the composite type. (Naming the columns, as we did above,
is irrelevant to the system.)

You must typecast the expressions to match the
definition of the composite type, or you will get errors like this:
ERROR:  function declared to return emp returns varchar instead of text at column 1
A different way to define the same function is:
CREATE FUNCTION new_emp() RETURNS emp AS $$
SELECT ROW('None', 1000.0, 25, '(2,2)')::emp;
$$ LANGUAGE SQL;
Here we wrote a SELECT that returns just a single
column of the correct composite type. This isn't really better
in this situation, but it is a handy alternative in some cases
— for example, if we need to compute the result by calling
another function that returns the desired composite value.
We could call this function directly in either of two ways:
SELECT new_emp();
new_emp
--------------------------
(None,1000.0,25,"(2,2)")
SELECT * FROM new_emp();
name | salary | age | cubicle
------+--------+-----+---------
None | 1000.0 | 25 | (2,2)
The second way is described more fully in the documentation on table functions.
When you use a function that returns a composite type,
you might want only one field (attribute) from its result.
You can do that with syntax like this:
SELECT (new_emp()).name;
name
------
None
The extra parentheses are needed to keep the parser from getting
confused. If you try to do it without them, you get something like this:
SELECT new_emp().name;
ERROR: syntax error at or near "." at character 17
LINE 1: SELECT new_emp().name;
^
Another option is to use
functional notation for extracting an attribute. The simple way
to explain this is that we can use the
notations attribute(table) and table.attribute
interchangeably.
SELECT name(new_emp());
name
------
None
-- This is the same as:
-- SELECT emp.name AS youngster FROM emp WHERE emp.age < 30;
SELECT name(emp) AS youngster FROM emp WHERE age(emp) < 30;
youngster
-----------
Sam
Andy
The equivalence between functional notation and attribute notation
makes it possible to use functions on composite types to emulate
computed fields.
For example, using the previous definition
for double_salary(emp), we can write
SELECT emp.name, emp.double_salary FROM emp;
An application using this wouldn't need to be directly aware that
double_salary isn't a real column of the table.
(You can also emulate computed fields with views.)
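As a sketch of the view-based alternative (the view name emp_dream is ours, not from the text above), the computed column can be baked into a view over emp:

```sql
-- Hypothetical view exposing double_salary as if it were a column.
CREATE VIEW emp_dream AS
    SELECT name, salary, salary * 2 AS double_salary
    FROM emp;

SELECT name, double_salary FROM emp_dream;
```

Applications can then select double_salary from the view without knowing whether it is stored or computed.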
Another way to use a function returning a composite type is to pass the
result to another function that accepts the correct row type as input:
CREATE FUNCTION getname(emp) RETURNS text AS $$
SELECT $1.name;
$$ LANGUAGE SQL;
SELECT getname(new_emp());
getname
---------
None
(1 row)
Still another way to use a function that returns a composite type is to
call it as a table function, as described in SQL Functions as Table
Sources, below.
Functions with Output Parameters

An alternative way of describing a function's results is to define it
with output parameters, as in this example:
CREATE FUNCTION add_em (IN x int, IN y int, OUT sum int)
AS 'SELECT $1 + $2'
LANGUAGE SQL;
SELECT add_em(3,7);
add_em
--------
10
(1 row)
This is not essentially different from the version of add_em
shown earlier. The real value of
output parameters is that they provide a convenient way of defining
functions that return several columns. For example,
CREATE FUNCTION sum_n_product (x int, y int, OUT sum int, OUT product int)
AS 'SELECT $1 + $2, $1 * $2'
LANGUAGE SQL;
SELECT * FROM sum_n_product(11,42);
sum | product
-----+---------
53 | 462
(1 row)
What has essentially happened here is that we have created an anonymous
composite type for the result of the function. The above example has
the same end result as
CREATE TYPE sum_prod AS (sum int, product int);
CREATE FUNCTION sum_n_product (int, int) RETURNS sum_prod
AS 'SELECT $1 + $2, $1 * $2'
LANGUAGE SQL;
but not having to bother with the separate composite type definition
is often handy.
Notice that output parameters are not included in the calling argument
list when invoking such a function from SQL. This is because
PostgreSQL considers only the input
parameters to define the function's calling signature. That means
also that only the input parameters matter when referencing the function
for purposes such as dropping it. We could drop the above function
with either of
DROP FUNCTION sum_n_product (x int, y int, OUT sum int, OUT product int);
DROP FUNCTION sum_n_product (int, int);
Parameters can be marked as IN (the default),
OUT, or INOUT. An INOUT
parameter serves as both an input parameter (part of the calling
argument list) and an output parameter (part of the result record type).
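As an illustration (the function name double_it is ours, not defined elsewhere in this chapter), an INOUT parameter is listed once but acts as both the input and the result column:

```sql
-- Hypothetical example: x is consumed as input and returned doubled.
CREATE FUNCTION double_it (INOUT x int)
AS 'SELECT $1 * 2'
LANGUAGE SQL;

SELECT double_it(21);
 double_it
-----------
        42
(1 row)
```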
SQL Functions as Table Sources

All SQL functions may be used in the FROM clause of a query,
but this is particularly useful for functions returning composite types.
If the function is defined to return a base type, the table function
produces a one-column table. If the function is defined to return
a composite type, the table function produces a column for each attribute
of the composite type.
Here is an example:
CREATE TABLE foo (fooid int, foosubid int, fooname text);
INSERT INTO foo VALUES (1, 1, 'Joe');
INSERT INTO foo VALUES (1, 2, 'Ed');
INSERT INTO foo VALUES (2, 1, 'Mary');
CREATE FUNCTION getfoo(int) RETURNS foo AS $$
SELECT * FROM foo WHERE fooid = $1;
$$ LANGUAGE SQL;
SELECT *, upper(fooname) FROM getfoo(1) AS t1;
fooid | foosubid | fooname | upper
-------+----------+---------+-------
1 | 1 | Joe | JOE
(1 row)
As the example shows, we can work with the columns of the function's
result just the same as if they were columns of a regular table.
Note that we only got one row out of the function. This is because
we did not use SETOF. That is described in the next section.
SQL Functions Returning Sets
When an SQL function is declared as returning SETOF
sometype, the function's final
SELECT query is executed to completion, and each row it
outputs is returned as an element of the result set.
This feature is normally used when calling the function in the FROM
clause. In this case each row returned by the function becomes
a row of the table seen by the query. For example, assume that
table foo has the same contents as above, and we say:
CREATE FUNCTION getfoo(int) RETURNS SETOF foo AS $$
SELECT * FROM foo WHERE fooid = $1;
$$ LANGUAGE SQL;
SELECT * FROM getfoo(1) AS t1;
Then we would get:
fooid | foosubid | fooname
-------+----------+---------
1 | 1 | Joe
1 | 2 | Ed
(2 rows)
Currently, functions returning sets may also be called in the select list
of a query. For each row that the query
generates by itself, the function returning set is invoked, and an output
row is generated for each element of the function's result set. Note,
however, that this capability is deprecated and may be removed in future
releases. The following is an example function returning a set from the
select list:
CREATE FUNCTION listchildren(text) RETURNS SETOF text AS $$
SELECT name FROM nodes WHERE parent = $1
$$ LANGUAGE SQL;
SELECT * FROM nodes;
name | parent
-----------+--------
Top |
Child1 | Top
Child2 | Top
Child3 | Top
SubChild1 | Child1
SubChild2 | Child1
(6 rows)
SELECT listchildren('Top');
listchildren
--------------
Child1
Child2
Child3
(3 rows)
SELECT name, listchildren(name) FROM nodes;
name | listchildren
--------+--------------
Top | Child1
Top | Child2
Top | Child3
Child1 | SubChild1
Child1 | SubChild2
(5 rows)
In the last SELECT,
notice that no output row appears for Child2, Child3, etc.
This happens because listchildren returns an empty set
for those arguments, so no result rows are generated.
Polymorphic SQL Functions
SQL functions may be declared to accept and
return the polymorphic types anyelement and
anyarray. See the description of polymorphic types for a more detailed
explanation of polymorphic functions. Here is a polymorphic
function make_array that builds up an array
from two elements of an arbitrary data type:
CREATE FUNCTION make_array(anyelement, anyelement) RETURNS anyarray AS $$
SELECT ARRAY[$1, $2];
$$ LANGUAGE SQL;
SELECT make_array(1, 2) AS intarray, make_array('a'::text, 'b') AS textarray;
intarray | textarray
----------+-----------
{1,2} | {a,b}
(1 row)
Notice the use of the typecast 'a'::text
to specify that the argument is of type text. This is
required if the argument is just a string literal, since otherwise
it would be treated as type
unknown, and an array of unknown is not a valid
type.
Without the typecast, you will get errors like this:
ERROR: could not determine "anyarray"/"anyelement" type because input has type "unknown"
It is permitted to have polymorphic arguments with a fixed
return type, but the converse is not permitted. For example:
CREATE FUNCTION is_greater(anyelement, anyelement) RETURNS boolean AS $$
SELECT $1 > $2;
$$ LANGUAGE SQL;
SELECT is_greater(1, 2);
is_greater
------------
f
(1 row)
CREATE FUNCTION invalid_func() RETURNS anyelement AS $$
SELECT 1;
$$ LANGUAGE SQL;
ERROR: cannot determine result data type
DETAIL: A function returning "anyarray" or "anyelement" must have at least one argument of either type.
Polymorphism can be used with functions that have output arguments.
For example:
CREATE FUNCTION dup (f1 anyelement, OUT f2 anyelement, OUT f3 anyarray)
AS 'select $1, array[$1,$1]' LANGUAGE sql;
SELECT * FROM dup(22);
f2 | f3
----+---------
22 | {22,22}
(1 row)
Function Overloading
More than one function may be defined with the same SQL name, so long
as the arguments they take are different. In other words,
function names can be overloaded. When a
query is executed, the server will determine which function to
call from the data types and the number of the provided arguments.
Overloading can also be used to simulate functions with a variable
number of arguments, up to a finite maximum number.
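As a sketch (these function names are illustrative, not from the text above), two overloads differing only in argument count can stand in for an optional third argument:

```sql
-- Two overloads of the same name, resolved by argument count.
CREATE FUNCTION add_all(int, int) RETURNS int
AS 'SELECT $1 + $2'
LANGUAGE SQL;

CREATE FUNCTION add_all(int, int, int) RETURNS int
AS 'SELECT $1 + $2 + $3'
LANGUAGE SQL;

SELECT add_all(1, 2) AS two_args, add_all(1, 2, 3) AS three_args;
 two_args | three_args
----------+------------
        3 |          6
(1 row)
```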
When creating a family of overloaded functions, one should be
careful not to create ambiguities. For instance, given the
functions
CREATE FUNCTION test(int, real) RETURNS ...
CREATE FUNCTION test(smallint, double precision) RETURNS ...
it is not immediately clear which function would be called with
some trivial input like test(1, 1.5). The
currently implemented resolution rules are described in the chapter
on type conversion, but it is unwise to design a system that subtly
relies on this behavior.
A function that takes a single argument of a composite type should
generally not have the same name as any attribute (field) of that type.
Recall that attribute(table) is considered equivalent
to table.attribute. In the case that there is an
ambiguity between a function on a composite type and an attribute of
the composite type, the attribute will always be used. It is possible
to override that choice by schema-qualifying the function name
(that is, schema.func(table)) but it's better to
avoid the problem by not choosing conflicting names.
When overloading C-language functions, there is an additional
constraint: The C name of each function in the family of
overloaded functions must be different from the C names of all
other functions, either internal or dynamically loaded. If this
rule is violated, the behavior is not portable. You might get a
run-time linker error, or one of the functions will get called
(usually the internal one). The alternative form of the
AS clause for the SQL CREATE
FUNCTION command decouples the SQL function name from
the function name in the C source code. For instance,
CREATE FUNCTION test(int) RETURNS int
AS 'filename', 'test_1arg'
LANGUAGE C;
CREATE FUNCTION test(int, int) RETURNS int
AS 'filename', 'test_2arg'
LANGUAGE C;
The names of the C functions here reflect one of many possible conventions.
Function Volatility Categories

Every function has a volatility classification, with
the possibilities being VOLATILE, STABLE, or
IMMUTABLE. VOLATILE is the default if the
CREATE FUNCTION command does not specify a category.
The volatility category is a
promise to the optimizer about the behavior of the function:
A VOLATILE function can do anything, including modifying
the database. It can return different results on successive calls with
the same arguments. The optimizer makes no assumptions about the
behavior of such functions. A query using a volatile function will
re-evaluate the function at every row where its value is needed.
A STABLE function cannot modify the database and is
guaranteed to return the same results given the same arguments
for all calls within a single surrounding query. This category
allows the optimizer to optimize away multiple calls of the function
within a single query. In particular, it is safe to use an expression
containing such a function in an index scan condition. (Since an
index scan will evaluate the comparison value only once, not once at
each row, it is not valid to use a VOLATILE function in
an index scan condition.)
An IMMUTABLE function cannot modify the database and is
guaranteed to return the same results given the same arguments forever.
This category allows the optimizer to pre-evaluate the function when
a query calls it with constant arguments. For example, a query like
SELECT ... WHERE x = 2 + 2 can be simplified on sight to
SELECT ... WHERE x = 4, because the function underlying
the integer addition operator is marked IMMUTABLE.
For best optimization results, you should label your functions with the
strictest volatility category that is valid for them.
Any function with side-effects must be labeled
VOLATILE, so that calls to it cannot be optimized away.
Even a function with no side-effects needs to be labeled
VOLATILE if its value can change within a single query;
some examples are random(), currval(), and
timeofday().
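As a sketch (the function names here are illustrative, not from the text above), the category is written after the language clause in CREATE FUNCTION:

```sql
-- Pure arithmetic: the same inputs always give the same output.
CREATE FUNCTION add_one(int) RETURNS int
AS 'SELECT $1 + 1'
LANGUAGE SQL IMMUTABLE;

-- Reads the database but does not modify it: stable within one query.
CREATE FUNCTION emp_count() RETURNS bigint
AS 'SELECT count(*) FROM emp'
LANGUAGE SQL STABLE;
```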
There is relatively little difference between the STABLE and
IMMUTABLE categories when considering simple interactive
queries that are planned and immediately executed: it doesn't matter
a lot whether a function is executed once during planning or once during
query execution startup. But there is a big difference if the plan is
saved and reused later. Labeling a function IMMUTABLE when
it really isn't may allow it to be prematurely folded to a constant during
planning, resulting in a stale value being re-used during subsequent uses
of the plan. This is a hazard when using prepared statements or when
using function languages that cache plans (such as
PL/pgSQL).
Because of the snapshotting behavior of MVCC,
a function containing only SELECT commands can safely be
marked STABLE, even if it selects from tables that might be
undergoing modifications by concurrent queries.
PostgreSQL will execute a STABLE
function using the snapshot established for the calling query, and so it
will see a fixed view of the database throughout that query.
Also note
that the current_timestamp family of functions qualifies
as stable, since their values do not change within a transaction.
The same snapshotting behavior is used for SELECT commands
within IMMUTABLE functions. It is generally unwise to select
from database tables within an IMMUTABLE function at all,
since the immutability will be broken if the table contents ever change.
However, PostgreSQL does not enforce this
restriction.
A common error is to label a function IMMUTABLE when its
results depend on a configuration parameter. For example, a function
that manipulates timestamps might well have results that depend on the
TimeZone setting. For safety, such functions should
be labeled STABLE instead.
Before PostgreSQL release 8.0, the requirement
that STABLE and IMMUTABLE functions cannot modify
the database was not enforced by the system. Release 8.0 enforces it
by requiring SQL functions and procedural language functions of these
categories to contain no SQL commands other than SELECT.
(This is not a completely bulletproof test, since such functions could
still call VOLATILE functions that modify the database.
If you do that, you will find that the STABLE or
IMMUTABLE function does not notice the database changes
applied by the called function.)
Procedural Language Functions

PostgreSQL allows user-defined functions
to be written in other languages besides SQL and C. These other
languages are generically called procedural
languages (PLs).
Procedural languages aren't built into the
PostgreSQL server; they are offered
by loadable modules.
See the chapters on procedural languages for more
information.
Internal Functions
Internal functions are functions written in C that have been statically
linked into the PostgreSQL server.
The body of the function definition
specifies the C-language name of the function, which need not be the
same as the name being declared for SQL use.
(For reasons of backwards compatibility, an empty body
is accepted as meaning that the C-language function name is the
same as the SQL name.)
Normally, all internal functions present in the
server are declared during the initialization of the database cluster (initdb),
but a user could use CREATE FUNCTION
to create additional alias names for an internal function.
Internal functions are declared in CREATE FUNCTION
with language name internal. For instance, to
create an alias for the sqrt function:
CREATE FUNCTION square_root(double precision) RETURNS double precision
AS 'dsqrt'
LANGUAGE internal
STRICT;
(Most internal functions expect to be declared strict.)
Not all predefined functions are
internal in the above sense. Some predefined
functions are written in SQL.
C-Language Functions
User-defined functions can be written in C (or a language that can
be made compatible with C, such as C++). Such functions are
compiled into dynamically loadable objects (also called shared
libraries) and are loaded by the server on demand. The dynamic
loading feature is what distinguishes C language functions
from internal functions — the actual coding conventions
are essentially the same for both. (Hence, the standard internal
function library is a rich source of coding examples for user-defined
C functions.)
Two different calling conventions are currently used for C functions.
The newer version 1 calling convention is indicated by writing
a PG_FUNCTION_INFO_V1() macro call for the function,
as illustrated below. Lack of such a macro indicates an old-style
(version 0) function. The language name specified in CREATE FUNCTION
is C in either case. Old-style functions are now deprecated
because of portability problems and lack of functionality, but they
are still supported for compatibility reasons.
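For illustration, a minimal version-1 function might look like this. This is a sketch, not from the text above: it assumes the PostgreSQL server headers postgres.h and fmgr.h are available, and the function name add_one is ours.

```c
#include "postgres.h"
#include "fmgr.h"

/* Declare add_one as a version-1 function. */
PG_FUNCTION_INFO_V1(add_one);

Datum
add_one(PG_FUNCTION_ARGS)
{
    /* Fetch the first argument as a 32-bit integer... */
    int32 arg = PG_GETARG_INT32(0);

    /* ...and return it incremented, wrapped as a Datum. */
    PG_RETURN_INT32(arg + 1);
}
```

Once compiled into a shared library, such a function would be registered with the two-part AS clause of CREATE FUNCTION, as described under Dynamic Loading below.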
Dynamic Loading
The first time a user-defined function in a particular
loadable object file is called in a session,
the dynamic loader loads that object file into memory so that the
function can be called. The CREATE FUNCTION
for a user-defined C function must therefore specify two pieces of
information for the function: the name of the loadable
object file, and the C name (link symbol) of the specific function to call
within that object file. If the C name is not explicitly specified then
it is assumed to be the same as the SQL function name.
The following algorithm is used to locate the shared object file
based on the name given in the CREATE FUNCTION
command:
If the name is an absolute path, the given file is loaded.
If the name starts with the string $libdir,
that part is replaced by the PostgreSQL package
library directory name, which is determined at build time.
If the name does not contain a directory part, the file is
searched for in the path specified by the configuration variable
dynamic_library_path.
Otherwise (the file was not found in the path, or it contains a
non-absolute directory part), the dynamic loader will try to
take the name as given, which will most likely fail. (It is
unreliable to depend on the current working directory.)
If this sequence does not work, the platform-specific shared
library file name extension (often .so) is
appended to the given name and this sequence is tried again. If
that fails as well, the load will fail.
The user ID the PostgreSQL server runs
as must be able to traverse the path to the file you intend to
load. Making the file or a higher-level directory not readable
and/or not executable by the postgres
user is a common mistake.
In any case, the file name that is given in the
CREATE FUNCTION command is recorded literally
in the system catalogs, so if the file needs to be loaded again
the same procedure is applied.
PostgreSQL will not compile a C function
automatically. The object file must be compiled before it is referenced
in a CREATE
FUNCTION command. See the section on compiling and linking
dynamically-loaded functions for additional information.
After it is used for the first time, a dynamically loaded object
file is retained in memory. Future calls in the same session to
the function(s) in that file will only incur the small overhead of
a symbol table lookup. If you need to force a reload of an object
file, for example after recompiling it, use the LOAD
command or begin a fresh session.
It is recommended to locate shared libraries either relative to
$libdir or through the dynamic library path.
This simplifies version upgrades if the new installation is at a
different location. The actual directory that
$libdir stands for can be found out with the
command pg_config --pkglibdir.
Before PostgreSQL release 7.2, only
exact absolute paths to object files could be specified in
CREATE FUNCTION. This approach is now deprecated
since it makes the function definition unnecessarily unportable.
It's best to specify just the shared library name with no path or
extension, and let the search mechanism provide that information
instead.
Base Types in C-Language Functions
To know how to write C-language functions, you need to know how
PostgreSQL internally represents base
data types and how they can be passed to and from functions.
Internally, PostgreSQL regards a base
type as a blob of memory. The user-defined
functions that you define over a type in turn define the way that
PostgreSQL can operate on it. That
is, PostgreSQL will only store and
retrieve the data from disk and use your user-defined functions
to input, process, and output the data.
Base types can have one of three internal formats:
pass by value, fixed-length
pass by reference, fixed-length
pass by reference, variable-length
By-value types can only be 1, 2, or 4 bytes in length
(also 8 bytes, if sizeof(Datum) is 8 on your machine).
You should be careful
to define your types such that they will be the same
size (in bytes) on all architectures. For example, the
long type is dangerous because it
is 4 bytes on some machines and 8 bytes on others, whereas
the int type is 4 bytes on most
Unix machines. A reasonable implementation of
the int4 type on Unix
machines might be:
/* 4-byte integer, passed by value */
typedef int int4;
On the other hand, fixed-length types of any size may
be passed by-reference. For example, here is a sample
implementation of a PostgreSQL type:
/* 16-byte structure, passed by reference */
typedef struct
{
double x, y;
} Point;
Only pointers to such types can be used when passing
them in and out of PostgreSQL functions.
To return a value of such a type, allocate the right amount of
memory with palloc, fill in the allocated memory,
and return a pointer to it. (Also, if you just want to return the
same value as one of your input arguments that's of the same data type,
you can skip the extra palloc and just return the
pointer to the input value.)
Finally, all variable-length types must also be passed
by reference. All variable-length types must begin
with a length field of exactly 4 bytes, and all data to
be stored within that type must be located in the memory
immediately following that length field. The
length field contains the total length of the structure,
that is, it includes the size of the length field
itself.
Never modify the contents of a pass-by-reference input
value. If you do so you are likely to corrupt on-disk data, since
the pointer you are given may well point directly into a disk buffer.
The sole exception to this rule is explained in
the section on user-defined aggregates.
As an example, we can define the type text as
follows:
typedef struct {
int4 length;
char data[1];
} text;
Obviously, the data field declared here is not long enough to hold
all possible strings. Since it's impossible to declare a variable-size
structure in C, we rely on the knowledge that the
C compiler won't range-check array subscripts. We
just allocate the necessary amount of space and then access the array as
if it were declared the right length. (This is a common trick, which
you can read about in many textbooks about C.)
When manipulating
variable-length types, we must be careful to allocate
the correct amount of memory and set the length field correctly.
For example, if we wanted to store 40 bytes in a text
structure, we might use a code fragment like this:
#include "postgres.h"
...
char buffer[40]; /* our source data */
...
text *destination = (text *) palloc(VARHDRSZ + 40);
destination->length = VARHDRSZ + 40;
memcpy(destination->data, buffer, 40);
...
VARHDRSZ is the same as sizeof(int4), but
it's considered good style to use the macro VARHDRSZ
to refer to the size of the overhead for a variable-length type.
The table below specifies which C type
corresponds to which SQL type when writing a C-language function
that uses a built-in type of PostgreSQL.
The Defined In column gives the header file that
needs to be included to get the type definition. (The actual
definition may be in a different file that is included by the
listed file. It is recommended that users stick to the defined
interface.) Note that you should always include
postgres.h first in any source file, because
it declares a number of things that you will need anyway.
Equivalent C Types for Built-In SQL Types
SQL Type                     C Type            Defined In
abstime                      AbsoluteTime      utils/nabstime.h
boolean                      bool              postgres.h (maybe compiler built-in)
box                          BOX*              utils/geo_decls.h
bytea                        bytea*            postgres.h
"char"                       char              (compiler built-in)
character                    BpChar*           postgres.h
cid                          CommandId         postgres.h
date                         DateADT           utils/date.h
smallint (int2)              int2 or int16     postgres.h
int2vector                   int2vector*       postgres.h
integer (int4)               int4 or int32     postgres.h
real (float4)                float4*           postgres.h
double precision (float8)    float8*           postgres.h
interval                     Interval*         utils/timestamp.h
lseg                         LSEG*             utils/geo_decls.h
name                         Name              postgres.h
oid                          Oid               postgres.h
oidvector                    oidvector*        postgres.h
path                         PATH*             utils/geo_decls.h
point                        POINT*            utils/geo_decls.h
regproc                      regproc           postgres.h
reltime                      RelativeTime      utils/nabstime.h
text                         text*             postgres.h
tid                          ItemPointer       storage/itemptr.h
time                         TimeADT           utils/date.h
time with time zone          TimeTzADT         utils/date.h
timestamp                    Timestamp*        utils/timestamp.h
tinterval                    TimeInterval      utils/nabstime.h
varchar                      VarChar*          postgres.h
xid                          TransactionId     postgres.h
Now that we've gone over all of the possible structures
for base types, we can show some examples of real functions.
Calling Conventions Version 0 for C-Language Functions
We present the old style calling convention first — although
this approach is now deprecated, it's easier to get a handle on
initially. In the version-0 method, the arguments and result
of the C function are just declared in normal C style, taking
care to use the C representation of each SQL data type as shown
above.
Here are some examples:
#include "postgres.h"
#include <string.h>
/* by value */
int
add_one(int arg)
{
return arg + 1;
}
/* by reference, fixed length */
float8 *
add_one_float8(float8 *arg)
{
float8 *result = (float8 *) palloc(sizeof(float8));
*result = *arg + 1.0;
return result;
}
Point *
makepoint(Point *pointx, Point *pointy)
{
Point *new_point = (Point *) palloc(sizeof(Point));
new_point->x = pointx->x;
new_point->y = pointy->y;
return new_point;
}
/* by reference, variable length */
text *
copytext(text *t)
{
/*
* VARSIZE is the total size of the struct in bytes.
*/
text *new_t = (text *) palloc(VARSIZE(t));
VARATT_SIZEP(new_t) = VARSIZE(t);
/*
* VARDATA is a pointer to the data region of the struct.
*/
memcpy((void *) VARDATA(new_t), /* destination */
(void *) VARDATA(t), /* source */
VARSIZE(t)-VARHDRSZ); /* how many bytes */
return new_t;
}
text *
concat_text(text *arg1, text *arg2)
{
int32 new_text_size = VARSIZE(arg1) + VARSIZE(arg2) - VARHDRSZ;
text *new_text = (text *) palloc(new_text_size);
VARATT_SIZEP(new_text) = new_text_size;
memcpy(VARDATA(new_text), VARDATA(arg1), VARSIZE(arg1)-VARHDRSZ);
memcpy(VARDATA(new_text) + (VARSIZE(arg1)-VARHDRSZ),
VARDATA(arg2), VARSIZE(arg2)-VARHDRSZ);
return new_text;
}
Supposing that the above code has been prepared in file
funcs.c and compiled into a shared object,
we could define the functions to PostgreSQL
with commands like this:
CREATE FUNCTION add_one(integer) RETURNS integer
AS 'DIRECTORY/funcs', 'add_one'
LANGUAGE C STRICT;
-- note overloading of SQL function name "add_one"
CREATE FUNCTION add_one(double precision) RETURNS double precision
AS 'DIRECTORY/funcs', 'add_one_float8'
LANGUAGE C STRICT;
CREATE FUNCTION makepoint(point, point) RETURNS point
AS 'DIRECTORY/funcs', 'makepoint'
LANGUAGE C STRICT;
CREATE FUNCTION copytext(text) RETURNS text
AS 'DIRECTORY/funcs', 'copytext'
LANGUAGE C STRICT;
CREATE FUNCTION concat_text(text, text) RETURNS text
AS 'DIRECTORY/funcs', 'concat_text'
LANGUAGE C STRICT;
Here, DIRECTORY stands for the
directory of the shared library file (for instance the
PostgreSQL tutorial directory, which
contains the code for the examples used in this section).
(Better style would be to use just 'funcs' in the
AS clause, after having added
DIRECTORY to the search path. In any
case, we may omit the system-specific extension for a shared
library, commonly .so or
.sl.)
Notice that we have specified the functions as strict,
meaning that
the system should automatically assume a null result if any input
value is null. By doing this, we avoid having to check for null inputs
in the function code. Without this, we'd have to check for null values
explicitly, by checking for a null pointer for each
pass-by-reference argument. (For pass-by-value arguments, we don't
even have a way to check!)
Although this calling convention is simple to use,
it is not very portable; on some architectures there are problems
with passing data types that are smaller than int this way. Also, there is
no simple way to return a null result, nor to cope with null arguments
in any way other than making the function strict. The version-1
convention, presented next, overcomes these objections.
Calling Conventions Version 1 for C-Language Functions
The version-1 calling convention relies on macros to suppress most
of the complexity of passing arguments and results. The C declaration
of a version-1 function is always
Datum funcname(PG_FUNCTION_ARGS)
In addition, the macro call
PG_FUNCTION_INFO_V1(funcname);
must appear in the same source file. (Conventionally, it's
written just before the function itself.) This macro call is not
needed for internal-language functions, since
PostgreSQL assumes that all internal functions
use the version-1 convention. It is, however, required for
dynamically-loaded functions.
In a version-1 function, each actual argument is fetched using a
PG_GETARG_xxx()
macro that corresponds to the argument's data type, and the
result is returned using a
PG_RETURN_xxx()
macro for the return type.
PG_GETARG_xxx()
takes as its argument the number of the function argument to
fetch, where the count starts at 0.
PG_RETURN_xxx()
takes as its argument the actual value to return.
Here we show the same functions as above, coded in version-1 style:
#include "postgres.h"
#include <string.h>
#include "fmgr.h"
/* by value */
PG_FUNCTION_INFO_V1(add_one);
Datum
add_one(PG_FUNCTION_ARGS)
{
int32 arg = PG_GETARG_INT32(0);
PG_RETURN_INT32(arg + 1);
}
/* by reference, fixed length */
PG_FUNCTION_INFO_V1(add_one_float8);
Datum
add_one_float8(PG_FUNCTION_ARGS)
{
/* The macros for FLOAT8 hide its pass-by-reference nature. */
float8 arg = PG_GETARG_FLOAT8(0);
PG_RETURN_FLOAT8(arg + 1.0);
}
PG_FUNCTION_INFO_V1(makepoint);
Datum
makepoint(PG_FUNCTION_ARGS)
{
/* Here, the pass-by-reference nature of Point is not hidden. */
Point *pointx = PG_GETARG_POINT_P(0);
Point *pointy = PG_GETARG_POINT_P(1);
Point *new_point = (Point *) palloc(sizeof(Point));
new_point->x = pointx->x;
new_point->y = pointy->y;
PG_RETURN_POINT_P(new_point);
}
/* by reference, variable length */
PG_FUNCTION_INFO_V1(copytext);
Datum
copytext(PG_FUNCTION_ARGS)
{
text *t = PG_GETARG_TEXT_P(0);
/*
* VARSIZE is the total size of the struct in bytes.
*/
text *new_t = (text *) palloc(VARSIZE(t));
VARATT_SIZEP(new_t) = VARSIZE(t);
/*
* VARDATA is a pointer to the data region of the struct.
*/
memcpy((void *) VARDATA(new_t), /* destination */
(void *) VARDATA(t), /* source */
VARSIZE(t)-VARHDRSZ); /* how many bytes */
PG_RETURN_TEXT_P(new_t);
}
PG_FUNCTION_INFO_V1(concat_text);
Datum
concat_text(PG_FUNCTION_ARGS)
{
text *arg1 = PG_GETARG_TEXT_P(0);
text *arg2 = PG_GETARG_TEXT_P(1);
int32 new_text_size = VARSIZE(arg1) + VARSIZE(arg2) - VARHDRSZ;
text *new_text = (text *) palloc(new_text_size);
VARATT_SIZEP(new_text) = new_text_size;
memcpy(VARDATA(new_text), VARDATA(arg1), VARSIZE(arg1)-VARHDRSZ);
memcpy(VARDATA(new_text) + (VARSIZE(arg1)-VARHDRSZ),
VARDATA(arg2), VARSIZE(arg2)-VARHDRSZ);
PG_RETURN_TEXT_P(new_text);
}
The CREATE FUNCTION commands are the same as
for the version-0 equivalents.
At first glance, the version-1 coding conventions may appear to
be just pointless obscurantism. They do, however, offer a number
of improvements, because the macros can hide unnecessary detail.
An example is that in coding add_one_float8, we no longer need to
be aware that float8 is a pass-by-reference type. Another
example is that the GETARG macros for variable-length types allow
for more efficient fetching of toasted (compressed or
out-of-line) values.
One big improvement in version-1 functions is better handling of null
inputs and results. The macro PG_ARGISNULL(n)
allows a function to test whether each input is null. (Of course, doing
this is only necessary in functions not declared strict.)
As with the
PG_GETARG_xxx() macros,
the input arguments are counted beginning at zero. Note that one
should refrain from executing
PG_GETARG_xxx() until
one has verified that the argument isn't null.
To return a null result, execute PG_RETURN_NULL();
this works in both strict and nonstrict functions.
Other options provided in the new-style interface are two
variants of the
PG_GETARG_xxx()
macros. The first of these,
PG_GETARG_xxx_COPY(),
guarantees to return a copy of the specified argument that is
safe for writing into. (The normal macros will sometimes return a
pointer to a value that is physically stored in a table, which
must not be written to. Using the
PG_GETARG_xxx_COPY()
macros guarantees a writable result.)
The second variant consists of the
PG_GETARG_xxx_SLICE()
macros which take three arguments. The first is the number of the
function argument (as above). The second and third are the offset and
length of the segment to be returned. Offsets are counted from
zero, and a negative length requests that the remainder of the
value be returned. These macros provide more efficient access to
parts of large values in the case where they have storage type
external. (The storage type of a column can be specified using
ALTER TABLE tablename ALTER
COLUMN colname SET STORAGE
storagetype. storagetype is one of
plain, external, extended,
or main.)
Finally, the version-1 function call conventions make it possible
to return set results, to implement trigger functions, and to write
procedural-language call handlers. Version-1 code is also more
portable than version-0, because it does not break restrictions
on function call protocol in the C standard. For more details
see src/backend/utils/fmgr/README in the
source distribution.
Writing Code
Before we turn to the more advanced topics, we should discuss
some coding rules for PostgreSQL
C-language functions. While it may be possible to load functions
written in languages other than C into
PostgreSQL, this is usually difficult
(when it is possible at all) because other languages, such as
C++, FORTRAN, or Pascal, often do not follow the same calling
convention as C. That is, other languages do not pass argument
and return values between functions in the same way. For this
reason, we will assume that your C-language functions are
actually written in C.
The basic rules for writing and building C functions are as follows:
Use pg_config --includedir-server
to find out where the PostgreSQL server header
files are installed on your system (or the system that your
users will be running on). This option is new with
PostgreSQL 7.2; PostgreSQL 7.1 used a
different option name. (pg_config
will exit with a non-zero status if it encounters an unknown
option.) For releases prior to 7.1 you will have to guess,
but since that was before the current calling conventions were
introduced, it is unlikely that you want to support those
releases.
When allocating memory, use the
PostgreSQL functions
palloc and pfree
instead of the corresponding C library functions
malloc and free.
The memory allocated by palloc will be
freed automatically at the end of each transaction, preventing
memory leaks.
Always zero the bytes of your structures using
memset. Without this, it's difficult to
support hash indexes or hash joins, as you must pick out only
the significant bits of your data structure to compute a hash.
Even if you initialize all fields of your structure, there may be
alignment padding (holes in the structure) that may contain
garbage values.
Most of the internal PostgreSQL
types are declared in postgres.h, while
the function manager interfaces
(PG_FUNCTION_ARGS, etc.) are in
fmgr.h, so you will need to include at
least these two files. For portability reasons it's best to
include postgres.h first,
before any other system or user header files. Including
postgres.h will also include
elog.h and palloc.h
for you.
Symbol names defined within object files must not conflict
with each other or with symbols defined in the
PostgreSQL server executable. You
will have to rename your functions or variables if you get
error messages to this effect.
Compiling and linking your code so that it can be dynamically
loaded into PostgreSQL always
requires special flags. See for a
detailed explanation of how to do it for your particular
operating system.
Extension Building Infrastructure
If you are thinking about distributing your
PostgreSQL extension modules, setting up a
portable build system for them can be fairly difficult. Therefore
the PostgreSQL installation provides a build
infrastructure for extensions, called PGXS, so
that simple extension modules can be built against an
already installed server. Note that this infrastructure is not
intended to be a universal build system framework that can be used
to build all software interfacing to PostgreSQL;
it simply automates common build rules for simple server extension
modules. For more complicated packages, you need to write your
own build system.
To use the infrastructure for your extension, you must write a
simple makefile. In that makefile, you need to set some variables
and finally include the global PGXS makefile.
Here is an example that builds an extension module named
isbn_issn consisting of a shared library, an
SQL script, and a documentation text file:
MODULES = isbn_issn
DATA_built = isbn_issn.sql
DOCS = README.isbn_issn
PGXS := $(shell pg_config --pgxs)
include $(PGXS)
The last two lines should always be the same. Earlier in the
file, you assign variables or add custom
make rules.
The following variables can be set:
MODULES
    list of shared objects to be built from source files with the same
    stem (do not include the suffix in this list)
DATA
    random files to install into prefix/share/contrib
DATA_built
    random files to install into prefix/share/contrib,
    which need to be built first
DOCS
    random files to install under prefix/doc/contrib
SCRIPTS
    script files (not binaries) to install into prefix/bin
SCRIPTS_built
    script files (not binaries) to install into prefix/bin,
    which need to be built first
REGRESS
    list of regression test cases (without suffix)

or at most one of these two:

PROGRAM
    a binary program to build (list object files in OBJS)
MODULE_big
    a shared object to build (list object files in OBJS)

The following can also be set:

EXTRA_CLEAN
    extra files to remove in make clean
PG_CPPFLAGS
    will be added to CPPFLAGS
PG_LIBS
    will be added to the PROGRAM link line
SHLIB_LINK
    will be added to the MODULE_big link line
Put this makefile as Makefile in the directory
which holds your extension. Then you can do
make to compile, and later make
install to install your module. The extension is
compiled and installed for the
PostgreSQL installation that
corresponds to the first pg_config command
found in your path.
Composite-Type Arguments in C-Language Functions
Composite types do not have a fixed layout like C structures.
Instances of a composite type may contain null fields. In
addition, composite types that are part of an inheritance
hierarchy may have different fields than other members of the
same inheritance hierarchy. Therefore,
PostgreSQL provides a function
interface for accessing fields of composite types from C.
Suppose we want to write a function to answer the query
SELECT name, c_overpaid(emp, 1500) AS overpaid
FROM emp
WHERE name = 'Bill' OR name = 'Sam';
Using call conventions version 0, we can define
c_overpaid as:
#include "postgres.h"
#include "executor/executor.h" /* for GetAttributeByName() */
bool
c_overpaid(HeapTupleHeader t, /* the current row of emp */
int32 limit)
{
bool isnull;
int32 salary;
salary = DatumGetInt32(GetAttributeByName(t, "salary", &isnull));
if (isnull)
return false;
return salary > limit;
}
In version-1 coding, the above would look like this:
#include "postgres.h"
#include "executor/executor.h" /* for GetAttributeByName() */
PG_FUNCTION_INFO_V1(c_overpaid);
Datum
c_overpaid(PG_FUNCTION_ARGS)
{
HeapTupleHeader t = PG_GETARG_HEAPTUPLEHEADER(0);
int32 limit = PG_GETARG_INT32(1);
bool isnull;
Datum salary;
salary = GetAttributeByName(t, "salary", &isnull);
if (isnull)
PG_RETURN_BOOL(false);
/* Alternatively, we might prefer to do PG_RETURN_NULL() for null salary. */
PG_RETURN_BOOL(DatumGetInt32(salary) > limit);
}
GetAttributeByName is the
PostgreSQL system function that
returns attributes out of the specified row. It has
three arguments: the argument of type HeapTupleHeader passed
into
the function, the name of the desired attribute, and a
return parameter that tells whether the attribute
is null. GetAttributeByName returns a Datum
value that you can convert to the proper data type by using the
appropriate DatumGetXXX()
macro. Note that the return value is meaningless if the null flag is
set; always check the null flag before trying to do anything with the
result.
There is also GetAttributeByNum, which selects
the target attribute by column number instead of name.
The following command declares the function
c_overpaid in SQL:
CREATE FUNCTION c_overpaid(emp, integer) RETURNS boolean
AS 'DIRECTORY/funcs', 'c_overpaid'
LANGUAGE C STRICT;
Notice we have used STRICT so that we did not have to
check whether the input arguments were NULL.
Returning Rows (Composite Types) from C-Language Functions
To return a row or composite-type value from a C-language
function, you can use a special API that provides macros and
functions to hide most of the complexity of building composite
data types. To use this API, the source file must include:
#include "funcapi.h"
There are two ways you can build a composite data value (henceforth
a tuple): you can build it from an array of Datum values,
or from an array of C strings that can be passed to the input
conversion functions of the tuple's column data types. In either
case, you first need to obtain or construct a TupleDesc
descriptor for the tuple structure. When working with Datums, you
pass the TupleDesc to BlessTupleDesc,
and then call heap_form_tuple for each row. When working
with C strings, you pass the TupleDesc to
TupleDescGetAttInMetadata, and then call
BuildTupleFromCStrings for each row. In the case of a
function returning a set of tuples, the setup steps can all be done
once during the first call of the function.
Several helper functions are available for setting up the needed
TupleDesc. The recommended way to do this in most
functions returning composite values is to call
TypeFuncClass get_call_result_type(FunctionCallInfo fcinfo,
Oid *resultTypeId,
TupleDesc *resultTupleDesc)
passing the same fcinfo struct passed to the calling function
itself. (This of course requires that you use the version-1
calling conventions.) resultTypeId can be specified
as NULL or as the address of a local variable to receive the
function's result type OID. resultTupleDesc should be the
address of a local TupleDesc variable. Check that the
result is TYPEFUNC_COMPOSITE; if so,
resultTupleDesc has been filled with the needed
TupleDesc. (If it is not, you can report an error along
the lines of function returning record called in context that
cannot accept type record.)
get_call_result_type can resolve the actual type of a
polymorphic function result, so it is useful in functions that return
scalar polymorphic results, not only functions that return composites.
The resultTypeId output is primarily useful for functions
returning polymorphic scalars.
get_call_result_type has a sibling
get_expr_result_type, which can be used to resolve the
expected output type for a function call represented by an expression
tree. This can be used when trying to determine the result type from
outside the function itself. There is also
get_func_result_type, which can be used when only the
function's OID is available. However these functions are not able
to deal with functions declared to return record, and
get_func_result_type cannot resolve polymorphic types,
so you should preferentially use get_call_result_type.
Older, now-deprecated functions for obtaining
TupleDescs are
TupleDesc RelationNameGetTupleDesc(const char *relname)
to get a TupleDesc for the row type of a named relation,
and
TupleDesc TypeGetTupleDesc(Oid typeoid, List *colaliases)
to get a TupleDesc based on a type OID. This can
be used to get a TupleDesc for a base or
composite type. It will not work for a function that returns
record, however, and it cannot resolve polymorphic
types.
Once you have a TupleDesc, call
TupleDesc BlessTupleDesc(TupleDesc tupdesc)
if you plan to work with Datums, or
AttInMetadata *TupleDescGetAttInMetadata(TupleDesc tupdesc)
if you plan to work with C strings. If you are writing a function
returning set, you can save the results of these functions in the
FuncCallContext structure, in the
tuple_desc or attinmeta field
respectively.
When working with Datums, use
HeapTuple heap_form_tuple(TupleDesc tupdesc, Datum *values, bool *isnull)
to build a HeapTuple given user data in Datum form.
When working with C strings, use
HeapTuple BuildTupleFromCStrings(AttInMetadata *attinmeta, char **values)
to build a HeapTuple given user data
in C string form. values is an array of C strings,
one for each attribute of the return row. Each C string should be in
the form expected by the input function of the attribute data
type. In order to return a null value for one of the attributes,
the corresponding pointer in the values array
should be set to NULL. This function will need to
be called again for each row you return.
Once you have built a tuple to return from your function, it
must be converted into a Datum. Use
HeapTupleGetDatum(HeapTuple tuple)
to convert a HeapTuple into a valid Datum. This
Datum can be returned directly if you intend to return
just a single row, or it can be used as the current return value
in a set-returning function.
An example appears in the next section.
Returning Sets from C-Language Functions
There is also a special API that provides support for returning
sets (multiple rows) from a C-language function. A set-returning
function must follow the version-1 calling conventions. Also,
source files must include funcapi.h, as
above.
A set-returning function (SRF) is called
once for each item it returns. The SRF must
therefore save enough state to remember what it was doing and
return the next item on each call.
The structure FuncCallContext is provided to help
control this process. Within a function, fcinfo->flinfo->fn_extra
is used to hold a pointer to FuncCallContext
across calls.
typedef struct
{
/*
* Number of times we've been called before
*
* call_cntr is initialized to 0 for you by SRF_FIRSTCALL_INIT(), and
* incremented for you every time SRF_RETURN_NEXT() is called.
*/
uint32 call_cntr;
/*
* OPTIONAL maximum number of calls
*
* max_calls is here for convenience only and setting it is optional.
* If not set, you must provide alternative means to know when the
* function is done.
*/
uint32 max_calls;
/*
* OPTIONAL pointer to result slot
*
* This is obsolete and only present for backwards compatibility, viz,
* user-defined SRFs that use the deprecated TupleDescGetSlot().
*/
TupleTableSlot *slot;
/*
* OPTIONAL pointer to miscellaneous user-provided context information
*
* user_fctx is for use as a pointer to your own data to retain
* arbitrary context information between calls of your function.
*/
void *user_fctx;
/*
* OPTIONAL pointer to struct containing attribute type input metadata
*
* attinmeta is for use when returning tuples (i.e., composite data types)
* and is not used when returning base data types. It is only needed
* if you intend to use BuildTupleFromCStrings() to create the return
* tuple.
*/
AttInMetadata *attinmeta;
/*
* memory context used for structures that must live for multiple calls
*
* multi_call_memory_ctx is set by SRF_FIRSTCALL_INIT() for you, and used
* by SRF_RETURN_DONE() for cleanup. It is the most appropriate memory
* context for any memory that is to be reused across multiple calls
* of the SRF.
*/
MemoryContext multi_call_memory_ctx;
/*
* OPTIONAL pointer to struct containing tuple description
*
* tuple_desc is for use when returning tuples (i.e. composite data types)
* and is only needed if you are going to build the tuples with
* heap_form_tuple() rather than with BuildTupleFromCStrings(). Note that
* the TupleDesc pointer stored here should usually have been run through
* BlessTupleDesc() first.
*/
TupleDesc tuple_desc;
} FuncCallContext;
An SRF uses several functions and macros that
automatically manipulate the FuncCallContext
structure (and expect to find it via fn_extra). Use
SRF_IS_FIRSTCALL()
to determine if your function is being called for the first or a
subsequent time. On the first call (only) use
SRF_FIRSTCALL_INIT()
to initialize the FuncCallContext. On every function call,
including the first, use
SRF_PERCALL_SETUP()
to properly set up for using the FuncCallContext
and clear any previously returned data left over from the
previous pass.
If your function has data to return, use
SRF_RETURN_NEXT(funcctx, result)
to return it to the caller. (result must be of type
Datum, either a single value or a tuple prepared as
described above.) Finally, when your function is finished
returning data, use
SRF_RETURN_DONE(funcctx)
to clean up and end the SRF.
The memory context that is current when the SRF is called is
a transient context that will be cleared between calls. This means
that you do not need to call pfree on everything
you allocated using palloc; it will go away anyway. However,
if you want to allocate any data structures to live across calls,
you need to put them somewhere else. The memory context referenced by
multi_call_memory_ctx is a suitable location for any
data that needs to survive until the SRF is finished running. In most
cases, this means that you should switch into
multi_call_memory_ctx while doing the first-call setup.
A complete pseudo-code example looks like the following:
Datum
my_set_returning_function(PG_FUNCTION_ARGS)
{
FuncCallContext *funcctx;
Datum result;
MemoryContext oldcontext;
further declarations as needed
if (SRF_IS_FIRSTCALL())
{
funcctx = SRF_FIRSTCALL_INIT();
oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
/* One-time setup code appears here: */
user code
if returning composite
    build TupleDesc, and perhaps AttInMetadata
endif returning composite
user code
MemoryContextSwitchTo(oldcontext);
}
/* Each-time setup code appears here: */
user code
funcctx = SRF_PERCALL_SETUP();
user code
/* this is just one way we might test whether we are done: */
if (funcctx->call_cntr < funcctx->max_calls)
{
/* Here we want to return another item: */
user code
obtain result Datum
SRF_RETURN_NEXT(funcctx, result);
}
else
{
/* Here we are done returning items and just need to clean up: */
user code
SRF_RETURN_DONE(funcctx);
}
}
A complete example of a simple SRF returning a composite type
looks like:
PG_FUNCTION_INFO_V1(retcomposite);
Datum
retcomposite(PG_FUNCTION_ARGS)
{
FuncCallContext *funcctx;
int call_cntr;
int max_calls;
TupleDesc tupdesc;
AttInMetadata *attinmeta;
/* stuff done only on the first call of the function */
if (SRF_IS_FIRSTCALL())
{
MemoryContext oldcontext;
/* create a function context for cross-call persistence */
funcctx = SRF_FIRSTCALL_INIT();
/* switch to memory context appropriate for multiple function calls */
oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
/* total number of tuples to be returned */
funcctx->max_calls = PG_GETARG_UINT32(0);
/* Build a tuple descriptor for our result type */
if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("function returning record called in context "
"that cannot accept type record")));
/*
* generate attribute metadata needed later to produce tuples from raw
* C strings
*/
attinmeta = TupleDescGetAttInMetadata(tupdesc);
funcctx->attinmeta = attinmeta;
MemoryContextSwitchTo(oldcontext);
}
/* stuff done on every call of the function */
funcctx = SRF_PERCALL_SETUP();
call_cntr = funcctx->call_cntr;
max_calls = funcctx->max_calls;
attinmeta = funcctx->attinmeta;
if (call_cntr < max_calls) /* do when there is more left to send */
{
char **values;
HeapTuple tuple;
Datum result;
/*
* Prepare a values array for building the returned tuple.
* This should be an array of C strings which will
* be processed later by the type input functions.
*/
values = (char **) palloc(3 * sizeof(char *));
values[0] = (char *) palloc(16 * sizeof(char));
values[1] = (char *) palloc(16 * sizeof(char));
values[2] = (char *) palloc(16 * sizeof(char));
snprintf(values[0], 16, "%d", 1 * PG_GETARG_INT32(1));
snprintf(values[1], 16, "%d", 2 * PG_GETARG_INT32(1));
snprintf(values[2], 16, "%d", 3 * PG_GETARG_INT32(1));
/* build a tuple */
tuple = BuildTupleFromCStrings(attinmeta, values);
/* make the tuple into a datum */
result = HeapTupleGetDatum(tuple);
/* clean up (this is not really necessary) */
pfree(values[0]);
pfree(values[1]);
pfree(values[2]);
pfree(values);
SRF_RETURN_NEXT(funcctx, result);
}
else /* do when there is no more left */
{
SRF_RETURN_DONE(funcctx);
}
}
One way to declare this function in SQL is:
CREATE TYPE __retcomposite AS (f1 integer, f2 integer, f3 integer);
CREATE OR REPLACE FUNCTION retcomposite(integer, integer)
RETURNS SETOF __retcomposite
AS 'filename', 'retcomposite'
LANGUAGE C IMMUTABLE STRICT;
A different way is to use OUT parameters:
CREATE OR REPLACE FUNCTION retcomposite(IN integer, IN integer,
OUT f1 integer, OUT f2 integer, OUT f3 integer)
RETURNS SETOF record
AS 'filename', 'retcomposite'
LANGUAGE C IMMUTABLE STRICT;
Notice that in this method the output type of the function is formally
an anonymous record type.
The directory contrib/tablefunc in the source
distribution contains more examples of set-returning functions.
Polymorphic Arguments and Return Types
C-language functions may be declared to accept and
return the polymorphic types
anyelement and anyarray.
See for a more detailed explanation
of polymorphic functions. When function arguments or return types
are defined as polymorphic types, the function author cannot know
in advance what data type it will be called with, or what it will
need to return. There are two routines provided in fmgr.h
to allow a version-1 C function to discover the actual data types
of its arguments and the type it is expected to return. The routines are
called get_fn_expr_rettype(FmgrInfo *flinfo) and
get_fn_expr_argtype(FmgrInfo *flinfo, int argnum).
They return the result or argument type OID, or InvalidOid if the
information is not available.
The structure flinfo is normally accessed as
fcinfo->flinfo. The parameter argnum
is zero based. get_call_result_type can also be used
as an alternative to get_fn_expr_rettype.
For example, suppose we want to write a function to accept a single
element of any type, and return a one-dimensional array of that type:
PG_FUNCTION_INFO_V1(make_array);
Datum
make_array(PG_FUNCTION_ARGS)
{
ArrayType *result;
Oid element_type = get_fn_expr_argtype(fcinfo->flinfo, 0);
Datum element;
bool isnull;
int16 typlen;
bool typbyval;
char typalign;
int ndims;
int dims[MAXDIM];
int lbs[MAXDIM];
if (!OidIsValid(element_type))
elog(ERROR, "could not determine data type of input");
/* get the provided element, being careful in case it's NULL */
isnull = PG_ARGISNULL(0);
if (isnull)
element = (Datum) 0;
else
element = PG_GETARG_DATUM(0);
/* we have one dimension */
ndims = 1;
/* and one element */
dims[0] = 1;
/* and lower bound is 1 */
lbs[0] = 1;
/* get required info about the element type */
get_typlenbyvalalign(element_type, &typlen, &typbyval, &typalign);
/* now build the array */
result = construct_md_array(&element, &isnull, ndims, dims, lbs,
element_type, typlen, typbyval, typalign);
PG_RETURN_ARRAYTYPE_P(result);
}
The following command declares the function
make_array in SQL:
CREATE FUNCTION make_array(anyelement) RETURNS anyarray
AS 'DIRECTORY/funcs', 'make_array'
LANGUAGE C IMMUTABLE;