/*-------------------------------------------------------------------------
 *
 * spi.h
 *				Server Programming Interface public declarations
 *
 * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 * src/include/executor/spi.h
 *
 *-------------------------------------------------------------------------
 */
#ifndef SPI_H
#define SPI_H

#include "commands/trigger.h"
#include "lib/ilist.h"
#include "nodes/parsenodes.h"
#include "utils/portal.h"

typedef struct SPITupleTable
{
	/* Public members */
	TupleDesc	tupdesc;		/* tuple descriptor */
	HeapTuple  *vals;			/* array of tuples */
	uint64		numvals;		/* number of valid tuples */

	/* Private members, not intended for external callers */
	uint64		alloced;		/* allocated length of vals array */
	MemoryContext tuptabcxt;	/* memory context of result table */
	slist_node	next;			/* link for internal bookkeeping */
	SubTransactionId subid;		/* subxact in which tuptable was created */
} SPITupleTable;

/* Plans are opaque structs for standard users of SPI */
typedef struct _SPI_plan *SPIPlanPtr;

#define SPI_ERROR_CONNECT		(-1)
#define SPI_ERROR_COPY			(-2)
#define SPI_ERROR_OPUNKNOWN		(-3)
#define SPI_ERROR_UNCONNECTED	(-4)
#define SPI_ERROR_CURSOR		(-5)	/* not used anymore */
#define SPI_ERROR_ARGUMENT		(-6)
#define SPI_ERROR_PARAM			(-7)
#define SPI_ERROR_TRANSACTION	(-8)
#define SPI_ERROR_NOATTRIBUTE	(-9)
#define SPI_ERROR_NOOUTFUNC		(-10)
#define SPI_ERROR_TYPUNKNOWN	(-11)
#define SPI_ERROR_REL_DUPLICATE (-12)
#define SPI_ERROR_REL_NOT_FOUND (-13)

#define SPI_OK_CONNECT			1
#define SPI_OK_FINISH			2
#define SPI_OK_FETCH			3
#define SPI_OK_UTILITY			4
#define SPI_OK_SELECT			5
#define SPI_OK_SELINTO			6
#define SPI_OK_INSERT			7
#define SPI_OK_DELETE			8
#define SPI_OK_UPDATE			9
#define SPI_OK_CURSOR			10
#define SPI_OK_INSERT_RETURNING 11
#define SPI_OK_DELETE_RETURNING 12
#define SPI_OK_UPDATE_RETURNING 13
#define SPI_OK_REWRITTEN		14
#define SPI_OK_REL_REGISTER		15
#define SPI_OK_REL_UNREGISTER	16
#define SPI_OK_TD_REGISTER		17
|
1997-09-04 15:26:19 +02:00
|
|
|
|
Transaction control in PL procedures
In each of the supplied procedural languages (PL/pgSQL, PL/Perl,
PL/Python, PL/Tcl), add language-specific commit and rollback
functions/commands to control transactions in procedures in that
language. Add similar underlying functions to SPI. Some additional
cleanup so that transaction commit or abort doesn't blow away data
structures still used by the procedure call. Add execution context
tracking to CALL and DO statements so that transaction control commands
can only be issued in top-level procedure and block calls, not function
calls or other procedure or block calls.
- SPI
Add a new function SPI_connect_ext() that is like SPI_connect() but
allows passing option flags. The only option flag right now is
SPI_OPT_NONATOMIC. A nonatomic SPI connection can execute transaction
control commands, otherwise it's not allowed. This is meant to be
passed down from CALL and DO statements which themselves know in which
context they are called. A nonatomic SPI connection uses different
memory management. A normal SPI connection allocates its memory in
TopTransactionContext. For nonatomic connections we use PortalContext
instead. As the comment in SPI_connect_ext() (previously SPI_connect())
indicates, one could potentially use PortalContext in all cases, but it
seems safest to leave the existing uses alone, because this stuff is
complicated enough already.
SPI also gets new functions SPI_start_transaction(), SPI_commit(), and
SPI_rollback(), which can be used by PLs to implement their transaction
control logic.
- portalmem.c
Some adjustments were made in the code that cleans up portals at
transaction abort. The portal code could already handle a command
*committing* a transaction and continuing (e.g., VACUUM), but it was not
quite prepared for a command *aborting* a transaction and continuing.
In AtAbort_Portals(), remove the code that marks an active portal as
failed. As the comment there already predicted, this doesn't work if
the running command wants to keep running after transaction abort. And
it's actually not necessary, because pquery.c is careful to run all
portal code in a PG_TRY block and explicitly runs MarkPortalFailed() if
there is an exception. So the code in AtAbort_Portals() is never used
anyway.
In AtAbort_Portals() and AtCleanup_Portals(), we need to be careful not
to clean up active portals too much. This mirrors similar code in
PreCommit_Portals().
- PL/Perl
Gets new functions spi_commit() and spi_rollback()
- PL/pgSQL
Gets new commands COMMIT and ROLLBACK.
Update the PL/SQL porting example in the documentation to reflect that
transactions are now possible in procedures.
- PL/Python
Gets new functions plpy.commit and plpy.rollback.
- PL/Tcl
Gets new commands commit and rollback.
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
2018-01-22 14:30:16 +01:00
|
|
|
#define SPI_OPT_NONATOMIC (1 << 0)

/* These used to be functions, now just no-ops for backwards compatibility */
#define SPI_push()	((void) 0)
#define SPI_pop()	((void) 0)
#define SPI_push_conditional()	false
#define SPI_pop_conditional(pushed) ((void) 0)
#define SPI_restore_connection()	((void) 0)

extern PGDLLIMPORT uint64 SPI_processed;
extern PGDLLIMPORT SPITupleTable *SPI_tuptable;
extern PGDLLIMPORT int SPI_result;

extern int	SPI_connect(void);
extern int	SPI_connect_ext(int options);
extern int	SPI_finish(void);
extern int	SPI_execute(const char *src, bool read_only, long tcount);
extern int	SPI_execute_plan(SPIPlanPtr plan, Datum *Values, const char *Nulls,
							 bool read_only, long tcount);
extern int	SPI_execute_plan_with_paramlist(SPIPlanPtr plan,
											ParamListInfo params,
											bool read_only, long tcount);
extern int	SPI_execute_plan_with_receiver(SPIPlanPtr plan,
										   ParamListInfo params,
										   bool read_only, long tcount,
										   DestReceiver *dest);
extern int	SPI_exec(const char *src, long tcount);
extern int	SPI_execp(SPIPlanPtr plan, Datum *Values, const char *Nulls,
					  long tcount);
extern int	SPI_execute_snapshot(SPIPlanPtr plan,
								 Datum *Values, const char *Nulls,
								 Snapshot snapshot,
								 Snapshot crosscheck_snapshot,
								 bool read_only, bool fire_triggers, long tcount);
extern int	SPI_execute_with_args(const char *src,
								  int nargs, Oid *argtypes,
								  Datum *Values, const char *Nulls,
								  bool read_only, long tcount);
extern int	SPI_execute_with_receiver(const char *src,
									  ParamListInfo params,
									  bool read_only, long tcount,
									  DestReceiver *dest);
extern SPIPlanPtr SPI_prepare(const char *src, int nargs, Oid *argtypes);
extern SPIPlanPtr SPI_prepare_cursor(const char *src, int nargs, Oid *argtypes,
									 int cursorOptions);
extern SPIPlanPtr SPI_prepare_params(const char *src,
									 ParserSetupHook parserSetup,
									 void *parserSetupArg,
									 int cursorOptions);
extern int	SPI_keepplan(SPIPlanPtr plan);
extern SPIPlanPtr SPI_saveplan(SPIPlanPtr plan);
extern int	SPI_freeplan(SPIPlanPtr plan);

extern Oid	SPI_getargtypeid(SPIPlanPtr plan, int argIndex);
extern int	SPI_getargcount(SPIPlanPtr plan);
extern bool SPI_is_cursor_plan(SPIPlanPtr plan);
extern bool SPI_plan_is_valid(SPIPlanPtr plan);
extern const char *SPI_result_code_string(int code);
extern List *SPI_plan_get_plan_sources(SPIPlanPtr plan);
extern CachedPlan *SPI_plan_get_cached_plan(SPIPlanPtr plan);

extern HeapTuple SPI_copytuple(HeapTuple tuple);
extern HeapTupleHeader SPI_returntuple(HeapTuple tuple, TupleDesc tupdesc);
extern HeapTuple SPI_modifytuple(Relation rel, HeapTuple tuple, int natts,
								 int *attnum, Datum *Values, const char *Nulls);
extern int	SPI_fnumber(TupleDesc tupdesc, const char *fname);
extern char *SPI_fname(TupleDesc tupdesc, int fnumber);
extern char *SPI_getvalue(HeapTuple tuple, TupleDesc tupdesc, int fnumber);
extern Datum SPI_getbinval(HeapTuple tuple, TupleDesc tupdesc, int fnumber, bool *isnull);
extern char *SPI_gettype(TupleDesc tupdesc, int fnumber);
extern Oid	SPI_gettypeid(TupleDesc tupdesc, int fnumber);
extern char *SPI_getrelname(Relation rel);
extern char *SPI_getnspname(Relation rel);
extern void *SPI_palloc(Size size);
extern void *SPI_repalloc(void *pointer, Size size);
extern void SPI_pfree(void *pointer);
extern Datum SPI_datumTransfer(Datum value, bool typByVal, int typLen);
extern void SPI_freetuple(HeapTuple pointer);
extern void SPI_freetuptable(SPITupleTable *tuptable);

extern Portal SPI_cursor_open(const char *name, SPIPlanPtr plan,
							  Datum *Values, const char *Nulls, bool read_only);
extern Portal SPI_cursor_open_with_args(const char *name,
										const char *src,
										int nargs, Oid *argtypes,
										Datum *Values, const char *Nulls,
										bool read_only, int cursorOptions);
extern Portal SPI_cursor_open_with_paramlist(const char *name, SPIPlanPtr plan,
											 ParamListInfo params, bool read_only);
consider back-patching. Patch by me; thanks to Hamid Akhtar for review.
Discussion: https://postgr.es/m/16040-eaacad11fecfb198@postgresql.org
extern Portal SPI_cursor_parse_open_with_paramlist(const char *name,
												   const char *src,
												   ParamListInfo params,
												   bool read_only,
												   int cursorOptions);
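The commit message above describes this call as the ParamListInfo-based replacement for SPI_cursor_open_with_args and its legacy ' '/'n' null-flag convention. A sketch of how a caller might use it (backend context assumed; the helper name, cursor name, and table are hypothetical). Per the same commit, makeParamList() installs the parse-analysis callback that resolves $N symbols from the ParamListInfo by default:

```c
#include "postgres.h"
#include "catalog/pg_type.h"
#include "executor/spi.h"
#include "nodes/params.h"

/* Hypothetical helper: open a cursor over a parameterized query. */
static Portal
open_scores_cursor(int32 min_score)
{
	ParamListInfo params = makeParamList(1);

	params->params[0].value = Int32GetDatum(min_score);
	params->params[0].isnull = false;
	params->params[0].pflags = PARAM_FLAG_CONST;
	params->params[0].ptype = INT4OID;

	/* $1 is resolved via the parser hooks makeParamList installs */
	return SPI_cursor_parse_open_with_paramlist("scores_cur",
												"SELECT * FROM scores WHERE score >= $1",
												params, true, 0);
}
```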
extern Portal SPI_cursor_find(const char *name);
extern void SPI_cursor_fetch(Portal portal, bool forward, long count);
extern void SPI_cursor_move(Portal portal, bool forward, long count);
extern void SPI_scroll_cursor_fetch(Portal portal, FetchDirection direction, long count);
extern void SPI_scroll_cursor_move(Portal portal, FetchDirection direction, long count);
extern void SPI_cursor_close(Portal portal);
extern int SPI_register_relation(EphemeralNamedRelation enr);
extern int SPI_unregister_relation(const char *name);
extern int SPI_register_trigger_data(TriggerData *tdata);
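SPI_register_trigger_data() makes the transition tables declared in CREATE TRIGGER ... REFERENCING visible to SPI queries under their declared names. A sketch of an AFTER trigger using it (backend context assumed; the function name, the transition-table name "newtab", and the audit_log table are hypothetical):

```c
#include "postgres.h"
#include "commands/trigger.h"
#include "executor/spi.h"
#include "fmgr.h"

PG_FUNCTION_INFO_V1(audit_trigger);

Datum
audit_trigger(PG_FUNCTION_ARGS)
{
	TriggerData *trigdata = (TriggerData *) fcinfo->context;

	if (SPI_connect() != SPI_OK_CONNECT)
		elog(ERROR, "SPI_connect failed");

	/* Expose the trigger's transition tables to SPI queries */
	if (SPI_register_trigger_data(trigdata) != SPI_OK_TD_REGISTER)
		elog(ERROR, "SPI_register_trigger_data failed");

	/* The transition table is queryable like an ordinary relation */
	SPI_execute("INSERT INTO audit_log SELECT now(), * FROM newtab",
				false, 0);

	SPI_finish();
	return PointerGetDatum(NULL);	/* result ignored for AFTER triggers */
}
```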
Transaction control in PL procedures
In each of the supplied procedural languages (PL/pgSQL, PL/Perl,
PL/Python, PL/Tcl), add language-specific commit and rollback
functions/commands to control transactions in procedures in that
language. Add similar underlying functions to SPI. Some additional
cleanup so that transaction commit or abort doesn't blow away data
structures still used by the procedure call. Add execution context
tracking to CALL and DO statements so that transaction control commands
can only be issued in top-level procedure and block calls, not function
calls or other procedure or block calls.
- SPI
Add a new function SPI_connect_ext() that is like SPI_connect() but
allows passing option flags. The only option flag right now is
SPI_OPT_NONATOMIC. A nonatomic SPI connection can execute transaction
control commands; an atomic one cannot. This is meant to be
passed down from CALL and DO statements which themselves know in which
context they are called. A nonatomic SPI connection uses different
memory management. A normal SPI connection allocates its memory in
TopTransactionContext. For nonatomic connections we use PortalContext
instead. As the comment in SPI_connect_ext() (previously SPI_connect())
indicates, one could potentially use PortalContext in all cases, but it
seems safest to leave the existing uses alone, because this stuff is
complicated enough already.
SPI also gets new functions SPI_start_transaction(), SPI_commit(), and
SPI_rollback(), which can be used by PLs to implement their transaction
control logic.
- portalmem.c
Some adjustments were made in the code that cleans up portals at
transaction abort. The portal code could already handle a command
*committing* a transaction and continuing (e.g., VACUUM), but it was not
quite prepared for a command *aborting* a transaction and continuing.
In AtAbort_Portals(), remove the code that marks an active portal as
failed. As the comment there already predicted, this doesn't work if
the running command wants to keep running after transaction abort. And
it's actually not necessary, because pquery.c is careful to run all
portal code in a PG_TRY block and explicitly runs MarkPortalFailed() if
there is an exception. So the code in AtAbort_Portals() is never used
anyway.
In AtAbort_Portals() and AtCleanup_Portals(), we need to be careful not
to clean up active portals too much. This mirrors similar code in
PreCommit_Portals().
- PL/Perl
Gets new functions spi_commit() and spi_rollback().
- PL/pgSQL
Gets new commands COMMIT and ROLLBACK.
Update the PL/SQL porting example in the documentation to reflect that
transactions are now possible in procedures.
- PL/Python
Gets new functions plpy.commit and plpy.rollback.
- PL/Tcl
Gets new commands commit and rollback.
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
extern void SPI_start_transaction(void);
extern void SPI_commit(void);
extern void SPI_commit_and_chain(void);
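Per the commit message above, these calls only work on a nonatomic SPI connection, i.e. one opened with SPI_connect_ext(SPI_OPT_NONATOMIC) from a CALL or DO context; on an atomic connection they raise an error. A sketch of batched commits from C code (backend context assumed; the helper name and work_queue table are hypothetical). Note that in this era of the API the caller starts the next transaction itself after SPI_commit():

```c
#include "postgres.h"
#include "executor/spi.h"

/* Hypothetical helper: process a queue in separately committed batches. */
static void
process_in_batches(void)
{
	if (SPI_connect_ext(SPI_OPT_NONATOMIC) != SPI_OK_CONNECT)
		elog(ERROR, "SPI_connect_ext failed");

	for (int i = 0; i < 10; i++)
	{
		SPI_execute("UPDATE work_queue SET done = true "
					"WHERE id IN (SELECT id FROM work_queue "
					"WHERE NOT done LIMIT 1000)", false, 0);

		/* Commit this batch, then begin a new transaction */
		SPI_commit();
		SPI_start_transaction();
	}

	SPI_finish();
}
```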
extern void SPI_rollback(void);
extern void SPI_rollback_and_chain(void);
extern void SPICleanup(void);
extern void AtEOXact_SPI(bool isCommit);
extern void AtEOSubXact_SPI(bool isCommit, SubTransactionId mySubid);
extern bool SPI_inside_nonatomic_context(void);
#endif /* SPI_H */