/*-------------------------------------------------------------------------
 *
 * portalcmds.c
 *		Utility commands affecting portals (that is, SQL cursor commands)
 *
 * Note: see also tcop/pquery.c, which implements portal operations for
 * the FE/BE protocol.  This module uses pquery.c for some operations.
 * And both modules depend on utils/mmgr/portalmem.c, which controls
 * storage management for portals (but doesn't run any queries in them).
 *
 *
 * Portions Copyright (c) 1996-2016, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 *
 * IDENTIFICATION
 *	  src/backend/commands/portalcmds.c
 *
 *-------------------------------------------------------------------------
 */

#include "postgres.h"

#include <limits.h>

#include "access/xact.h"
#include "commands/portalcmds.h"
#include "executor/executor.h"
#include "executor/tstoreReceiver.h"
#include "tcop/pquery.h"
#include "utils/memutils.h"
#include "utils/snapmgr.h"


/*
 * PerformCursorOpen
 *		Execute SQL DECLARE CURSOR command.
 *
 * The query has already been through parse analysis, rewriting, and planning.
 * When it gets here, it looks like a SELECT PlannedStmt, except that the
 * utilityStmt field is set.
 */
void
PerformCursorOpen(PlannedStmt *stmt, ParamListInfo params,
				  const char *queryString, bool isTopLevel)
{
	DeclareCursorStmt *cstmt = (DeclareCursorStmt *) stmt->utilityStmt;
	Portal		portal;
	MemoryContext oldContext;

	if (cstmt == NULL || !IsA(cstmt, DeclareCursorStmt))
		elog(ERROR, "PerformCursorOpen called for non-cursor query");

	/*
	 * Disallow empty-string cursor name (conflicts with protocol-level
	 * unnamed portal).
	 */
	if (!cstmt->portalname || cstmt->portalname[0] == '\0')
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_CURSOR_NAME),
				 errmsg("invalid cursor name: must not be empty")));

	/*
	 * If this is a non-holdable cursor, we require that this statement has
	 * been executed inside a transaction block (or else, it would have no
	 * user-visible effect).
	 */
	if (!(cstmt->options & CURSOR_OPT_HOLD))
		RequireTransactionChain(isTopLevel, "DECLARE CURSOR");

	/*
	 * Create a portal and copy the plan and queryString into its memory.
	 */
	portal = CreatePortal(cstmt->portalname, false, false);

	oldContext = MemoryContextSwitchTo(PortalGetHeapMemory(portal));

	stmt = copyObject(stmt);
	stmt->utilityStmt = NULL;	/* make it look like plain SELECT */

	queryString = pstrdup(queryString);

	PortalDefineQuery(portal,
					  NULL,
					  queryString,
					  "SELECT",	/* cursor's query is always a SELECT */
					  list_make1(stmt),
					  NULL);

	/*----------
	 * Also copy the outer portal's parameter list into the inner portal's
	 * memory context.  We want to pass down the parameter values in case we
	 * had a command like
	 *		DECLARE c CURSOR FOR SELECT ... WHERE foo = $1
	 * This will have been parsed using the outer parameter set and the
	 * parameter value needs to be preserved for use when the cursor is
	 * executed.
	 *----------
	 */
	params = copyParamList(params);

	MemoryContextSwitchTo(oldContext);

	/*
	 * Set up options for portal.
	 *
	 * If the user didn't specify a SCROLL type, allow or disallow scrolling
	 * based on whether it would require any additional runtime overhead to
	 * do so.  Also, we disallow scrolling for FOR UPDATE cursors.
	 */
	portal->cursorOptions = cstmt->options;
	if (!(portal->cursorOptions & (CURSOR_OPT_SCROLL | CURSOR_OPT_NO_SCROLL)))
	{
		if (stmt->rowMarks == NIL &&
			ExecSupportsBackwardScan(stmt->planTree))
			portal->cursorOptions |= CURSOR_OPT_SCROLL;
		else
			portal->cursorOptions |= CURSOR_OPT_NO_SCROLL;
	}

	/*
	 * Start execution, inserting parameters if any.
	 */
	PortalStart(portal, params, 0, GetActiveSnapshot());

	Assert(portal->strategy == PORTAL_ONE_SELECT);

	/*
	 * We're done; the query won't actually be run until PerformPortalFetch
	 * is called.
	 */
}
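The SCROLL defaulting logic in PerformCursorOpen can be illustrated in isolation. This is a standalone sketch, not PostgreSQL source: the flag values mirror the CURSOR_OPT_* bits defined in nodes/parsenodes.h (assumed here), and the two booleans stand in for the `stmt->rowMarks == NIL` and `ExecSupportsBackwardScan()` tests.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for the real CURSOR_OPT_* bits */
#define CURSOR_OPT_SCROLL	 0x0002
#define CURSOR_OPT_NO_SCROLL 0x0004

/*
 * If the user specified neither SCROLL nor NO SCROLL, default to SCROLL
 * only when it costs nothing: no FOR UPDATE row marks and the plan
 * supports backward scan.  Explicit options pass through unchanged.
 */
static int
resolve_scroll_default(int options, bool has_row_marks, bool backward_scan_ok)
{
	if (!(options & (CURSOR_OPT_SCROLL | CURSOR_OPT_NO_SCROLL)))
	{
		if (!has_row_marks && backward_scan_ok)
			options |= CURSOR_OPT_SCROLL;
		else
			options |= CURSOR_OPT_NO_SCROLL;
	}
	return options;
}
```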


/*
 * PerformPortalFetch
 *		Execute SQL FETCH or MOVE command.
 *
 * stmt: parsetree node for command
 * dest: where to send results
 * completionTag: points to a buffer of size COMPLETION_TAG_BUFSIZE
 *		in which to store a command completion status string.
 *
 * completionTag may be NULL if caller doesn't want a status string.
 */
void
PerformPortalFetch(FetchStmt *stmt,
				   DestReceiver *dest,
				   char *completionTag)
{
	Portal		portal;
	uint64		nprocessed;

	/*
	 * Disallow empty-string cursor name (conflicts with protocol-level
	 * unnamed portal).
	 */
	if (!stmt->portalname || stmt->portalname[0] == '\0')
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_CURSOR_NAME),
				 errmsg("invalid cursor name: must not be empty")));

	/* get the portal from the portal name */
	portal = GetPortalByName(stmt->portalname);
	if (!PortalIsValid(portal))
	{
		ereport(ERROR,
				(errcode(ERRCODE_UNDEFINED_CURSOR),
				 errmsg("cursor \"%s\" does not exist", stmt->portalname)));
		return;					/* keep compiler happy */
	}

	/* Adjust dest if needed.  MOVE wants destination DestNone */
	if (stmt->ismove)
		dest = None_Receiver;

	/* Do it */
	nprocessed = PortalRunFetch(portal,
								stmt->direction,
								stmt->howMany,
								dest);

	/* Return command status if wanted */
	if (completionTag)
		snprintf(completionTag, COMPLETION_TAG_BUFSIZE, "%s " UINT64_FORMAT,
				 stmt->ismove ? "MOVE" : "FETCH",
				 nprocessed);
}
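The completion-tag formatting above can be sketched standalone. This is not PostgreSQL source: the buffer size is a local stand-in for COMPLETION_TAG_BUFSIZE, and the C99 `PRIu64` macro plays the role of PostgreSQL's platform-dependent UINT64_FORMAT, reflecting the widening of the processed-tuple count to 64 bits so tags report counts above 4G correctly.

```c
#include <inttypes.h>			/* PRIu64, uint64_t */
#include <stdbool.h>
#include <stdio.h>

#define TAG_BUFSIZE 64			/* stand-in for COMPLETION_TAG_BUFSIZE */

/* Build a "FETCH n" or "MOVE n" command-completion tag with a 64-bit count */
static void
build_fetch_tag(char *tag, bool ismove, uint64_t nprocessed)
{
	snprintf(tag, TAG_BUFSIZE, "%s %" PRIu64,
			 ismove ? "MOVE" : "FETCH", nprocessed);
}
```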


/*
 * PerformPortalClose
 *		Close a cursor.
 */
void
PerformPortalClose(const char *name)
{
	Portal		portal;

	/* NULL means CLOSE ALL */
	if (name == NULL)
	{
		PortalHashTableDeleteAll();
		return;
	}

	/*
	 * Disallow empty-string cursor name (conflicts with protocol-level
	 * unnamed portal).
	 */
	if (name[0] == '\0')
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_CURSOR_NAME),
				 errmsg("invalid cursor name: must not be empty")));

	/*
	 * get the portal from the portal name
	 */
	portal = GetPortalByName(name);
	if (!PortalIsValid(portal))
	{
		ereport(ERROR,
				(errcode(ERRCODE_UNDEFINED_CURSOR),
				 errmsg("cursor \"%s\" does not exist", name)));
		return;					/* keep compiler happy */
	}

	/*
	 * Note: PortalCleanup is called as a side-effect, if not already done.
	 */
	PortalDrop(portal, false);
}


/*
 * PortalCleanup
 *
 * Clean up a portal when it's dropped.  This is the standard cleanup hook
 * for portals.
 *
 * Note: if portal->status is PORTAL_FAILED, we are probably being called
 * during error abort, and must be careful to avoid doing anything that
 * is likely to fail again.
 */
void
PortalCleanup(Portal portal)
{
	QueryDesc  *queryDesc;

	/*
	 * sanity checks
	 */
	AssertArg(PortalIsValid(portal));
	AssertArg(portal->cleanup == PortalCleanup);

	/*
	 * Shut down executor, if still running.  We skip this during error
	 * abort, since other mechanisms will take care of releasing executor
	 * resources, and we can't be sure that ExecutorEnd itself wouldn't fail.
	 */
	queryDesc = PortalGetQueryDesc(portal);
	if (queryDesc)
	{
		/*
		 * Reset the queryDesc before anything else.  This prevents us from
		 * trying to shut down the executor twice, in case of an error below.
		 * The transaction abort mechanisms will take care of resource
		 * cleanup in such a case.
		 */
		portal->queryDesc = NULL;

		if (portal->status != PORTAL_FAILED)
		{
			ResourceOwner saveResourceOwner;

			/* We must make the portal's resource owner current */
			saveResourceOwner = CurrentResourceOwner;
			PG_TRY();
			{
				if (portal->resowner)
					CurrentResourceOwner = portal->resowner;
				ExecutorFinish(queryDesc);
				ExecutorEnd(queryDesc);
				FreeQueryDesc(queryDesc);
			}
			PG_CATCH();
			{
				/* Ensure CurrentResourceOwner is restored on error */
				CurrentResourceOwner = saveResourceOwner;
				PG_RE_THROW();
			}
			PG_END_TRY();
			CurrentResourceOwner = saveResourceOwner;
		}
	}
}
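PostgreSQL's PG_TRY/PG_CATCH macros, used above to restore CurrentResourceOwner on both the normal and the error path, are built on setjmp/longjmp. The discipline can be sketched standalone with hypothetical names (a void pointer stands in for the resource-owner global); note the real PG_CATCH block rethrows via PG_RE_THROW, whereas this sketch swallows the simulated error.

```c
#include <setjmp.h>
#include <stddef.h>

/* Hypothetical stand-ins for CurrentResourceOwner and the error machinery */
static void *current_owner = NULL;
static jmp_buf catch_buf;

/*
 * Temporarily install portal_owner as the current owner, run a step that
 * may "error out" (simulated by longjmp), and restore the saved owner on
 * both the normal and the error path -- the same shape as PortalCleanup's
 * PG_TRY block.
 */
static void
shutdown_with_owner(void *portal_owner, int should_fail)
{
	void	   *save_owner = current_owner;

	if (setjmp(catch_buf) == 0)	/* "PG_TRY()" */
	{
		if (portal_owner)
			current_owner = portal_owner;
		if (should_fail)
			longjmp(catch_buf, 1);	/* simulate an error mid-shutdown */
	}
	else						/* "PG_CATCH()": restore, then swallow */
	{
		current_owner = save_owner;
		return;
	}
	current_owner = save_owner;	/* normal path also restores */
}
```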


/*
 * PersistHoldablePortal
 *
 * Prepare the specified Portal for access outside of the current
 * transaction.  When this function returns, all future accesses to the
 * portal must be done via the Tuplestore (not by invoking the
 * executor).
 */
void
PersistHoldablePortal(Portal portal)
{
	QueryDesc  *queryDesc = PortalGetQueryDesc(portal);
	Portal		saveActivePortal;
	ResourceOwner saveResourceOwner;
	MemoryContext savePortalContext;
	MemoryContext oldcxt;
|
This patch implements holdable cursors, following the proposal
(materialization into a tuple store) discussed on pgsql-hackers earlier.
I've updated the documentation and the regression tests.
Notes on the implementation:
- I needed to change the tuple store API slightly -- it assumes that it
won't be used to hold data across transaction boundaries, so the temp
files that it uses for on-disk storage are automatically reclaimed at
end-of-transaction. I added a flag to tuplestore_begin_heap() to control
this behavior. Is changing the tuple store API in this fashion OK?
- in order to store executor results in a tuple store, I added a new
CommandDest. This works well for the most part, with one exception: the
current DestFunction API doesn't provide enough information to allow the
Executor to store results into an arbitrary tuple store (where the
particular tuple store to use is chosen by the call site of
ExecutorRun). To workaround this, I've temporarily hacked up a solution
that works, but is not ideal: since the receiveTuple DestFunction is
passed the portal name, we can use that to lookup the Portal data
structure for the cursor and then use that to get at the tuple store the
Portal is using. This unnecessarily ties the Portal code with the
tupleReceiver code, but it works...
The proper fix for this is probably to change the DestFunction API --
Tom suggested passing the full QueryDesc to the receiveTuple function.
In that case, callers of ExecutorRun could "subclass" QueryDesc to add
any additional fields that their particular CommandDest needed to get
access to. This approach would work, but I'd like to think about it for
a little bit longer before deciding which route to go. In the mean time,
the code works fine, so I don't think a fix is urgent.
- (semi-related) I added a NO SCROLL keyword to DECLARE CURSOR, and
adjusted the behavior of SCROLL in accordance with the discussion on
-hackers.
- (unrelated) Cleaned up some SGML markup in sql.sgml, copy.sgml
Neil Conway
    /*
     * If we're preserving a holdable portal, we had better be inside the
     * transaction that originally created it.
     */
    Assert(portal->createSubid != InvalidSubTransactionId);
    Assert(queryDesc != NULL);

    /*
     * Caller must have created the tuplestore already.
     */
    Assert(portal->holdContext != NULL);
    Assert(portal->holdStore != NULL);

    /*
     * Before closing down the executor, we must copy the tupdesc into
     * long-term memory, since it was created in executor memory.
     */
    oldcxt = MemoryContextSwitchTo(portal->holdContext);

    portal->tupDesc = CreateTupleDescCopy(portal->tupDesc);

    MemoryContextSwitchTo(oldcxt);

    /*
     * Check for improper portal use, and mark portal active.
     */
    MarkPortalActive(portal);

    /*
     * Set up global portal context pointers.
     */
    saveActivePortal = ActivePortal;
    saveResourceOwner = CurrentResourceOwner;
    savePortalContext = PortalContext;
    PG_TRY();
    {
        ActivePortal = portal;
        if (portal->resowner)
            CurrentResourceOwner = portal->resowner;
        PortalContext = PortalGetHeapMemory(portal);

        MemoryContextSwitchTo(PortalContext);

        PushActiveSnapshot(queryDesc->snapshot);

        /*
         * Rewind the executor: we need to store the entire result set in the
         * tuplestore, so that subsequent backward FETCHs can be processed.
         */
        ExecutorRewind(queryDesc);

        /*
         * Change the destination to output to the tuplestore.  Note we tell
         * the tuplestore receiver to detoast all data passed through it.
         */
        queryDesc->dest = CreateDestReceiver(DestTuplestore);
        SetTuplestoreDestReceiverParams(queryDesc->dest,
                                        portal->holdStore,
                                        portal->holdContext,
                                        true);

        /* Fetch the result set into the tuplestore */
        ExecutorRun(queryDesc, ForwardScanDirection, 0L);

        (*queryDesc->dest->rDestroy) (queryDesc->dest);
        queryDesc->dest = NULL;

        /*
         * Now shut down the inner executor.
         */
        portal->queryDesc = NULL;   /* prevent double shutdown */
        ExecutorFinish(queryDesc);
        ExecutorEnd(queryDesc);
        FreeQueryDesc(queryDesc);

        /*
         * Set the position in the result set.
         */
        MemoryContextSwitchTo(portal->holdContext);

        if (portal->atEnd)
        {
            /*
             * Just force the tuplestore forward to its end.  The size of the
             * skip request here is arbitrary.
             */
            while (tuplestore_skiptuples(portal->holdStore, 1000000, true))
                 /* continue */ ;
        }
        else
        {
            tuplestore_rescan(portal->holdStore);

            if (!tuplestore_skiptuples(portal->holdStore,
                                       portal->portalPos,
                                       true))
                elog(ERROR, "unexpected end of tuple stream");
        }
    }
    PG_CATCH();
    {
        /* Uncaught error while executing portal: mark it dead */
        MarkPortalFailed(portal);

        /* Restore global vars and propagate error */
        ActivePortal = saveActivePortal;
        CurrentResourceOwner = saveResourceOwner;
        PortalContext = savePortalContext;

        PG_RE_THROW();
    }
    PG_END_TRY();

    MemoryContextSwitchTo(oldcxt);

    /* Mark portal not active */
    portal->status = PORTAL_READY;

    ActivePortal = saveActivePortal;
    CurrentResourceOwner = saveResourceOwner;
    PortalContext = savePortalContext;

    PopActiveSnapshot();

    /*
     * We can now release any subsidiary memory of the portal's heap context;
     * we'll never use it again.  The executor already dropped its context,
     * but this will clean up anything that glommed onto the portal's heap via
     * PortalContext.
     */
    MemoryContextDeleteChildren(PortalGetHeapMemory(portal));
}