/*-------------------------------------------------------------------------
*
* functioncmds.c
*
* Routines for CREATE and DROP FUNCTION commands and CREATE and DROP
* CAST commands.
*
* Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
*
* IDENTIFICATION
* src/backend/commands/functioncmds.c
*
* DESCRIPTION
* These routines take the parse tree and pick out the
* appropriate arguments/flags, and pass the results to the
* corresponding "FooDefine" routines (in src/catalog) that do
* the actual catalog-munging. These routines also verify permission
* of the user to execute the command.
*
* NOTES
* These things must be defined and committed in the following order:
* "create function":
* input/output, recv/send procedures
* "create type":
* type
* "create operator":
* operators
*
*-------------------------------------------------------------------------
*/
#include "postgres.h"
#include "access/genam.h"
#include "access/htup_details.h"
#include "access/sysattr.h"
#include "access/table.h"
#include "catalog/catalog.h"
#include "catalog/dependency.h"
#include "catalog/indexing.h"
#include "catalog/objectaccess.h"
#include "catalog/pg_aggregate.h"
#include "catalog/pg_cast.h"
#include "catalog/pg_language.h"
#include "catalog/pg_namespace.h"
#include "catalog/pg_proc.h"
#include "catalog/pg_transform.h"
#include "catalog/pg_type.h"
#include "commands/alter.h"
#include "commands/defrem.h"
Invent "trusted" extensions, and remove the pg_pltemplate catalog. This patch creates a new extension property, "trusted". An extension that's marked that way in its control file can be installed by a non-superuser who has the CREATE privilege on the current database, even if the extension contains objects that normally would have to be created by a superuser. The objects within the extension will (by default) be owned by the bootstrap superuser, but the extension itself will be owned by the calling user. This allows replicating the old behavior around trusted procedural languages, without all the special-case logic in CREATE LANGUAGE. We have, however, chosen to loosen the rules slightly: formerly, only a database owner could take advantage of the special case that allowed installation of a trusted language, but now anyone who has CREATE privilege can do so. Having done that, we can delete the pg_pltemplate catalog, moving the knowledge it contained into the extension script files for the various PLs. This ends up being no change at all for the in-core PLs, but it is a large step forward for external PLs: they can now have the same ease of installation as core PLs do. The old "trusted PL" behavior was only available to PLs that had entries in pg_pltemplate, but now any extension can be marked trusted if appropriate. This also removes one of the stumbling blocks for our Python 2 -> 3 migration, since the association of "plpythonu" with Python 2 is no longer hard-wired into pg_pltemplate's initial contents. Exactly where we go from here on that front remains to be settled, but one problem is fixed. Patch by me, reviewed by Peter Eisentraut, Stephen Frost, and others. Discussion: https://postgr.es/m/5889.1566415762@sss.pgh.pa.us
2020-01-30 00:42:43 +01:00
#include "commands/extension.h"
#include "commands/proclang.h"
#include "executor/execdesc.h"
#include "executor/executor.h"
#include "funcapi.h"
#include "miscadmin.h"
#include "optimizer/optimizer.h"
#include "parser/parse_coerce.h"
#include "parser/parse_collate.h"
#include "parser/parse_expr.h"
#include "parser/parse_func.h"
#include "parser/parse_type.h"
#include "pgstat.h"
#include "utils/acl.h"
#include "utils/builtins.h"
#include "utils/fmgroids.h"
#include "utils/guc.h"
#include "utils/lsyscache.h"
#include "utils/memutils.h"
#include "utils/rel.h"
#include "utils/syscache.h"
#include "utils/typcache.h"
/*
* Examine the RETURNS clause of the CREATE FUNCTION statement
* and return information about it as *prorettype_p and *returnsSet_p.
*
* This is more complex than the average typename lookup because we want to
* allow a shell type to be used, or even created if the specified return type
* doesn't exist yet. (Without this, there's no way to define the I/O procs
* for a new type.) But SQL function creation won't cope, so error out if
* the target language is SQL. (We do this here, not in the SQL-function
* validator, so as not to produce a NOTICE and then an ERROR for the same
* condition.)
*/
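/*
 * Added illustration (hedged; not part of the original file, names are
 * hypothetical): the shell-type path described above is what makes the
 * classic base-type bootstrap sequence possible, since the I/O function
 * must be created before its return type fully exists:
 *
 *   CREATE FUNCTION complex_in(cstring) RETURNS complex
 *     AS 'MODULE_PATHNAME' LANGUAGE C;  -- NOTICE: creates shell type
 *   CREATE TYPE complex (INPUT = complex_in, OUTPUT = complex_out);
 */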
static void
compute_return_type(TypeName *returnType, Oid languageOid,
Oid *prorettype_p, bool *returnsSet_p)
{
Oid rettype;
Type typtup;
AclResult aclresult;
typtup = LookupTypeName(NULL, returnType, NULL, false);
if (typtup)
{
if (!((Form_pg_type) GETSTRUCT(typtup))->typisdefined)
{
if (languageOid == SQLlanguageId)
ereport(ERROR,
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
errmsg("SQL function cannot return shell type %s",
TypeNameToString(returnType))));
else
ereport(NOTICE,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("return type %s is only a shell",
TypeNameToString(returnType))));
}
rettype = typeTypeId(typtup);
ReleaseSysCache(typtup);
}
else
{
char *typnam = TypeNameToString(returnType);
Oid namespaceId;
AclResult aclresult;
char *typname;
ObjectAddress address;
/*
* Only C-coded functions can be I/O functions. We enforce this
* restriction here mainly to prevent littering the catalogs with
* shell types due to simple typos in user-defined function
* definitions.
*/
if (languageOid != INTERNALlanguageId &&
languageOid != ClanguageId)
ereport(ERROR,
(errcode(ERRCODE_UNDEFINED_OBJECT),
errmsg("type \"%s\" does not exist", typnam)));
/* Reject if there's typmod decoration, too */
if (returnType->typmods != NIL)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("type modifier cannot be specified for shell type \"%s\"",
typnam)));
/* Otherwise, go ahead and make a shell type */
ereport(NOTICE,
(errcode(ERRCODE_UNDEFINED_OBJECT),
errmsg("type \"%s\" is not yet defined", typnam),
errdetail("Creating a shell type definition.")));
namespaceId = QualifiedNameGetCreationNamespace(returnType->names,
&typname);
aclresult = pg_namespace_aclcheck(namespaceId, GetUserId(),
ACL_CREATE);
if (aclresult != ACLCHECK_OK)
aclcheck_error(aclresult, OBJECT_SCHEMA,
get_namespace_name(namespaceId));
address = TypeShellMake(typname, namespaceId, GetUserId());
rettype = address.objectId;
Assert(OidIsValid(rettype));
}
aclresult = pg_type_aclcheck(rettype, GetUserId(), ACL_USAGE);
if (aclresult != ACLCHECK_OK)
aclcheck_error_type(aclresult, rettype);
*prorettype_p = rettype;
*returnsSet_p = returnType->setof;
}

/*
* Interpret the function parameter list of a CREATE FUNCTION or
* CREATE AGGREGATE statement.
*
* Input parameters:
* parameters: list of FunctionParameter structs
* languageOid: OID of function language (InvalidOid if it's CREATE AGGREGATE)
* objtype: needed only to determine error handling and required result type
*
* Results are stored into output parameters. parameterTypes must always
* be created, but the other arrays are set to NULL if not needed.
* variadicArgType is set to the variadic array type if there's a VARIADIC
* parameter (there can be only one); or to InvalidOid if not.
* requiredResultType is set to InvalidOid if there are no OUT parameters,
* else it is set to the OID of the implied result type.
*/
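/*
 * Added illustration (hedged; not in the original sources): given
 *
 *   CREATE FUNCTION f(a int, OUT b int, OUT c text) ...
 *
 * parameterTypes holds only the signature types {int4}, allParameterTypes
 * holds {int4, int4, text}, parameterModes holds {i, o, o}, parameterNames
 * holds {a, b, c}, variadicArgType is InvalidOid, and requiredResultType
 * is RECORD because there is more than one OUT parameter.
 */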
void
interpret_function_parameter_list(ParseState *pstate,
List *parameters,
Oid languageOid,
ObjectType objtype,
oidvector **parameterTypes,
ArrayType **allParameterTypes,
ArrayType **parameterModes,
ArrayType **parameterNames,
List **parameterDefaults,
Oid *variadicArgType,
Oid *requiredResultType)
{
int parameterCount = list_length(parameters);
Oid *sigArgTypes;
int sigArgCount = 0;
Datum *allTypes;
Datum *paramModes;
Datum *paramNames;
int outCount = 0;
int varCount = 0;
bool have_names = false;
bool have_defaults = false;
ListCell *x;
int i;
*variadicArgType = InvalidOid; /* default result */
*requiredResultType = InvalidOid; /* default result */
sigArgTypes = (Oid *) palloc(parameterCount * sizeof(Oid));
allTypes = (Datum *) palloc(parameterCount * sizeof(Datum));
paramModes = (Datum *) palloc(parameterCount * sizeof(Datum));
paramNames = (Datum *) palloc0(parameterCount * sizeof(Datum));
*parameterDefaults = NIL;
/* Scan the list and extract data into work arrays */
i = 0;
foreach(x, parameters)
{
FunctionParameter *fp = (FunctionParameter *) lfirst(x);
TypeName *t = fp->argType;
bool isinput = false;
Oid toid;
Type typtup;
AclResult aclresult;
typtup = LookupTypeName(NULL, t, NULL, false);
if (typtup)
{
if (!((Form_pg_type) GETSTRUCT(typtup))->typisdefined)
{
/* As above, hard error if language is SQL */
if (languageOid == SQLlanguageId)
ereport(ERROR,
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
errmsg("SQL function cannot accept shell type %s",
TypeNameToString(t))));
/* We don't allow creating aggregates on shell types either */
else if (objtype == OBJECT_AGGREGATE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
errmsg("aggregate cannot accept shell type %s",
TypeNameToString(t))));
else
ereport(NOTICE,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("argument type %s is only a shell",
TypeNameToString(t))));
}
toid = typeTypeId(typtup);
ReleaseSysCache(typtup);
}
else
{
ereport(ERROR,
(errcode(ERRCODE_UNDEFINED_OBJECT),
errmsg("type %s does not exist",
TypeNameToString(t))));
toid = InvalidOid; /* keep compiler quiet */
}
aclresult = pg_type_aclcheck(toid, GetUserId(), ACL_USAGE);
if (aclresult != ACLCHECK_OK)
aclcheck_error_type(aclresult, toid);
if (t->setof)
{
if (objtype == OBJECT_AGGREGATE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
errmsg("aggregates cannot accept set arguments")));
else if (objtype == OBJECT_PROCEDURE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
errmsg("procedures cannot accept set arguments")));
else
ereport(ERROR,
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
errmsg("functions cannot accept set arguments")));
}
/* handle input parameters */
if (fp->mode != FUNC_PARAM_OUT && fp->mode != FUNC_PARAM_TABLE)
isinput = true;
/* handle signature parameters */
if (fp->mode == FUNC_PARAM_IN || fp->mode == FUNC_PARAM_INOUT ||
(objtype == OBJECT_PROCEDURE && fp->mode == FUNC_PARAM_OUT) ||
fp->mode == FUNC_PARAM_VARIADIC)
{
/* other signature parameters can't follow a VARIADIC parameter */
if (varCount > 0)
ereport(ERROR,
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
errmsg("VARIADIC parameter must be the last signature parameter")));
sigArgTypes[sigArgCount++] = toid;
}
/* handle output parameters */
if (fp->mode != FUNC_PARAM_IN && fp->mode != FUNC_PARAM_VARIADIC)
{
if (objtype == OBJECT_PROCEDURE)
*requiredResultType = RECORDOID;
else if (outCount == 0) /* save first output param's type */
*requiredResultType = toid;
outCount++;
}
if (fp->mode == FUNC_PARAM_VARIADIC)
{
*variadicArgType = toid;
varCount++;
/* validate variadic parameter type */
switch (toid)
{
case ANYARRAYOID:
case ANYCOMPATIBLEARRAYOID:
case ANYOID:
/* okay */
break;
default:
if (!OidIsValid(get_element_type(toid)))
ereport(ERROR,
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
errmsg("VARIADIC parameter must be an array")));
break;
}
}
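/*
 * Added illustration (hedged): the validation just above accepts, e.g.,
 * "VARIADIC nums numeric[]" (element type numeric) as well as the
 * polymorphic forms "VARIADIC anyarray" and "VARIADIC "any"", while a
 * non-array such as "VARIADIC n numeric" is rejected.
 */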
allTypes[i] = ObjectIdGetDatum(toid);
paramModes[i] = CharGetDatum(fp->mode);
if (fp->name && fp->name[0])
{
ListCell *px;
/*
* As of Postgres 9.0 we disallow using the same name for two
* input or two output function parameters. Depending on the
* function's language, conflicting input and output names might
* be bad too, but we leave it to the PL to complain if so.
*/
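/*
 * Added example (hedged): f(a int, OUT a int) passes this check, since
 * reusing an input name for an output is left to the PL to reject, while
 * f(a int, a text) raises the duplicate-name error below.
 */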
foreach(px, parameters)
{
FunctionParameter *prevfp = (FunctionParameter *) lfirst(px);
if (prevfp == fp)
break;
/* pure in doesn't conflict with pure out */
if ((fp->mode == FUNC_PARAM_IN ||
fp->mode == FUNC_PARAM_VARIADIC) &&
(prevfp->mode == FUNC_PARAM_OUT ||
prevfp->mode == FUNC_PARAM_TABLE))
continue;
if ((prevfp->mode == FUNC_PARAM_IN ||
prevfp->mode == FUNC_PARAM_VARIADIC) &&
(fp->mode == FUNC_PARAM_OUT ||
fp->mode == FUNC_PARAM_TABLE))
continue;
if (prevfp->name && prevfp->name[0] &&
strcmp(prevfp->name, fp->name) == 0)
ereport(ERROR,
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
errmsg("parameter name \"%s\" used more than once",
fp->name)));
}
paramNames[i] = CStringGetTextDatum(fp->name);
have_names = true;
}
if (fp->defexpr)
{
Node *def;
if (!isinput)
ereport(ERROR,
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
errmsg("only input parameters can have default values")));
def = transformExpr(pstate, fp->defexpr,
EXPR_KIND_FUNCTION_DEFAULT);
def = coerce_to_specific_type(pstate, def, toid, "DEFAULT");
assign_expr_collations(pstate, def);
/*
* Make sure no variables are referred to (this is probably dead
* code now that add_missing_from is history).
*/
if (list_length(pstate->p_rtable) != 0 ||
contain_var_clause(def))
ereport(ERROR,
(errcode(ERRCODE_INVALID_COLUMN_REFERENCE),
errmsg("cannot use table references in parameter default value")));
/*
* transformExpr() should have already rejected subqueries,
* aggregates, and window functions, based on the EXPR_KIND_ for a
* default expression.
*
* It can't return a set either --- but coerce_to_specific_type
* already checked that for us.
*
* Note: the point of these restrictions is to ensure that an
* expression that, on its face, hasn't got subplans, aggregates,
* etc cannot suddenly have them after function default arguments
* are inserted.
*/
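/*
 * Added example (hedged): "x int DEFAULT 2 + 2" is accepted here, while
 * a default that references a column raises the error above, and one
 * containing a subquery such as "DEFAULT (SELECT 1)" is expected to have
 * been rejected already by transformExpr().
 */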
*parameterDefaults = lappend(*parameterDefaults, def);
have_defaults = true;
}
else
{
if (isinput && have_defaults)
ereport(ERROR,
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
errmsg("input parameters after one with a default value must also have defaults")));
}
i++;
}
/* Now construct the proper outputs as needed */
*parameterTypes = buildoidvector(sigArgTypes, sigArgCount);
if (outCount > 0 || varCount > 0)
{
*allParameterTypes = construct_array(allTypes, parameterCount, OIDOID,
sizeof(Oid), true, TYPALIGN_INT);
*parameterModes = construct_array(paramModes, parameterCount, CHAROID,
1, true, TYPALIGN_CHAR);
if (outCount > 1)
*requiredResultType = RECORDOID;
/* otherwise we set requiredResultType correctly above */
}
else
{
*allParameterTypes = NULL;
*parameterModes = NULL;
}
if (have_names)
{
for (i = 0; i < parameterCount; i++)
{
if (paramNames[i] == PointerGetDatum(NULL))
paramNames[i] = CStringGetTextDatum("");
}
*parameterNames = construct_array(paramNames, parameterCount, TEXTOID,
-1, false, TYPALIGN_INT);
}
else
*parameterNames = NULL;
}

/*
* Recognize one of the options that can be passed to both CREATE
* FUNCTION and ALTER FUNCTION and return it via one of the out
* parameters. Returns true if the passed option was recognized. If
* the out parameter we were going to assign to points to non-NULL,
* raise a duplicate-clause error. (We don't try to detect duplicate
* SET parameters though --- if you're redundant, the last one wins.)
*/
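/*
 * Added illustration (hedged): e.g. "CREATE FUNCTION ... IMMUTABLE
 * STABLE" arrives as two "volatility" DefElems, so the second one takes
 * the duplicate_error exit below, whereas repeated SET clauses are all
 * collected into *set_items.
 */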
static bool
compute_common_attribute(ParseState *pstate,
bool is_procedure,
DefElem *defel,
DefElem **volatility_item,
DefElem **strict_item,
DefElem **security_item,
DefElem **leakproof_item,
List **set_items,
DefElem **cost_item,
DefElem **rows_item,
DefElem **support_item,
DefElem **parallel_item)
{
if (strcmp(defel->defname, "volatility") == 0)
{
if (is_procedure)
goto procedure_error;
if (*volatility_item)
goto duplicate_error;
*volatility_item = defel;
}
else if (strcmp(defel->defname, "strict") == 0)
{
if (is_procedure)
goto procedure_error;
if (*strict_item)
goto duplicate_error;
*strict_item = defel;
}
else if (strcmp(defel->defname, "security") == 0)
{
if (*security_item)
goto duplicate_error;
*security_item = defel;
}
else if (strcmp(defel->defname, "leakproof") == 0)
{
if (is_procedure)
goto procedure_error;
if (*leakproof_item)
goto duplicate_error;
*leakproof_item = defel;
}
else if (strcmp(defel->defname, "set") == 0)
{
*set_items = lappend(*set_items, defel->arg);
}
else if (strcmp(defel->defname, "cost") == 0)
{
if (is_procedure)
goto procedure_error;
if (*cost_item)
goto duplicate_error;
*cost_item = defel;
}
else if (strcmp(defel->defname, "rows") == 0)
{
if (is_procedure)
goto procedure_error;
if (*rows_item)
goto duplicate_error;
*rows_item = defel;
}
else if (strcmp(defel->defname, "support") == 0)
{
if (is_procedure)
goto procedure_error;
if (*support_item)
goto duplicate_error;
*support_item = defel;
}
else if (strcmp(defel->defname, "parallel") == 0)
{
if (is_procedure)
goto procedure_error;
if (*parallel_item)
goto duplicate_error;
*parallel_item = defel;
}
else
return false;
/* Recognized an option */
return true;
duplicate_error:
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("conflicting or redundant options"),
parser_errposition(pstate, defel->location)));
return false; /* keep compiler quiet */
procedure_error:
ereport(ERROR,
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
errmsg("invalid attribute in procedure definition"),
parser_errposition(pstate, defel->location)));
return false;
}

static char
interpret_func_volatility(DefElem *defel)
{
char *str = strVal(defel->arg);
if (strcmp(str, "immutable") == 0)
return PROVOLATILE_IMMUTABLE;
else if (strcmp(str, "stable") == 0)
return PROVOLATILE_STABLE;
else if (strcmp(str, "volatile") == 0)
return PROVOLATILE_VOLATILE;
else
{
elog(ERROR, "invalid volatility \"%s\"", str);
return 0; /* keep compiler quiet */
}
}
static char
interpret_func_parallel(DefElem *defel)
{
char *str = strVal(defel->arg);
if (strcmp(str, "safe") == 0)
return PROPARALLEL_SAFE;
else if (strcmp(str, "unsafe") == 0)
return PROPARALLEL_UNSAFE;
else if (strcmp(str, "restricted") == 0)
return PROPARALLEL_RESTRICTED;
else
{
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("parameter \"parallel\" must be SAFE, RESTRICTED, or UNSAFE")));
return PROPARALLEL_UNSAFE; /* keep compiler quiet */
}
}

/*
* Update a proconfig value according to a list of VariableSetStmt items.
*
* The input and result may be NULL to signify a null entry.
*/
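/*
 * Added illustration (hedged): "ALTER FUNCTION f() SET search_path =
 * public" arrives here as a single VariableSetStmt and adds a
 * "search_path=public" entry to the array; "RESET search_path" deletes
 * that entry; "RESET ALL" discards the array entirely by returning NULL.
 */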
static ArrayType *
update_proconfig_value(ArrayType *a, List *set_items)
{
ListCell *l;
foreach(l, set_items)
{
VariableSetStmt *sstmt = lfirst_node(VariableSetStmt, l);
if (sstmt->kind == VAR_RESET_ALL)
a = NULL;
else
{
char *valuestr = ExtractSetVariableArgs(sstmt);
if (valuestr)
a = GUCArrayAdd(a, sstmt->name, valuestr);
else /* RESET */
a = GUCArrayDelete(a, sstmt->name);
}
}
return a;
}

static Oid
interpret_func_support(DefElem *defel)
{
List *procName = defGetQualifiedName(defel);
Oid procOid;
Oid argList[1];
/*
* Support functions always take one INTERNAL argument and return
* INTERNAL.
*/
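/*
 * Added example (hedged; names hypothetical): a usable support function
 * must be declared along the lines of
 *
 *   CREATE FUNCTION my_support(internal) RETURNS internal
 *     AS 'MODULE_PATHNAME' LANGUAGE C;
 *
 * and is then attached with CREATE FUNCTION ... SUPPORT my_support.
 */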
argList[0] = INTERNALOID;
procOid = LookupFuncName(procName, 1, argList, true);
if (!OidIsValid(procOid))
ereport(ERROR,
(errcode(ERRCODE_UNDEFINED_FUNCTION),
errmsg("function %s does not exist",
func_signature_string(procName, 1, NIL, argList))));
if (get_func_rettype(procOid) != INTERNALOID)
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("support function %s must return type %s",
NameListToString(procName), "internal")));
/*
* Someday we might want an ACL check here; but for now, we insist that
* you be superuser to specify a support function, so privilege on the
* support function is moot.
*/
if (!superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to specify a support function")));
return procOid;
}

/*
* Dissect the list of options assembled in gram.y into function
* attributes.
*/
static void
compute_function_attributes(ParseState *pstate,
bool is_procedure,
List *options,
List **as,
char **language,
Node **transform,
bool *windowfunc_p,
char *volatility_p,
bool *strict_p,
bool *security_definer,
bool *leakproof_p,
ArrayType **proconfig,
float4 *procost,
float4 *prorows,
Oid *prosupport,
char *parallel_p)
{
ListCell *option;
DefElem *as_item = NULL;
DefElem *language_item = NULL;
DefElem *transform_item = NULL;
DefElem *windowfunc_item = NULL;
DefElem *volatility_item = NULL;
DefElem *strict_item = NULL;
DefElem *security_item = NULL;
DefElem *leakproof_item = NULL;
List *set_items = NIL;
DefElem *cost_item = NULL;
DefElem *rows_item = NULL;
DefElem *support_item = NULL;
DefElem *parallel_item = NULL;
foreach(option, options)
{
DefElem *defel = (DefElem *) lfirst(option);
if (strcmp(defel->defname, "as") == 0)
{
if (as_item)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("conflicting or redundant options"),
parser_errposition(pstate, defel->location)));
as_item = defel;
}
else if (strcmp(defel->defname, "language") == 0)
{
if (language_item)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("conflicting or redundant options"),
parser_errposition(pstate, defel->location)));
language_item = defel;
}
else if (strcmp(defel->defname, "transform") == 0)
{
if (transform_item)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("conflicting or redundant options"),
parser_errposition(pstate, defel->location)));
transform_item = defel;
}
else if (strcmp(defel->defname, "window") == 0)
{
if (windowfunc_item)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("conflicting or redundant options"),
parser_errposition(pstate, defel->location)));
if (is_procedure)
ereport(ERROR,
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
errmsg("invalid attribute in procedure definition"),
parser_errposition(pstate, defel->location)));
windowfunc_item = defel;
}
else if (compute_common_attribute(pstate,
is_procedure,
defel,
&volatility_item,
&strict_item,
&security_item,
&leakproof_item,
&set_items,
&cost_item,
&rows_item,
&support_item,
&parallel_item))
{
/* recognized common option */
continue;
}
else
elog(ERROR, "option \"%s\" not recognized",
defel->defname);
}
/* process required items */
if (as_item)
*as = (List *) as_item->arg;
else
{
ereport(ERROR,
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
errmsg("no function body specified")));
*as = NIL; /* keep compiler quiet */
}
if (language_item)
*language = strVal(language_item->arg);
else
{
ereport(ERROR,
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
errmsg("no language specified")));
*language = NULL; /* keep compiler quiet */
}
/* process optional items */
if (transform_item)
*transform = transform_item->arg;
if (windowfunc_item)
*windowfunc_p = intVal(windowfunc_item->arg);
if (volatility_item)
*volatility_p = interpret_func_volatility(volatility_item);
if (strict_item)
*strict_p = intVal(strict_item->arg);
if (security_item)
*security_definer = intVal(security_item->arg);
if (leakproof_item)
*leakproof_p = intVal(leakproof_item->arg);
if (set_items)
*proconfig = update_proconfig_value(NULL, set_items);
if (cost_item)
{
*procost = defGetNumeric(cost_item);
if (*procost <= 0)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("COST must be positive")));
}
if (rows_item)
{
*prorows = defGetNumeric(rows_item);
if (*prorows <= 0)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("ROWS must be positive")));
}
if (support_item)
*prosupport = interpret_func_support(support_item);
if (parallel_item)
*parallel_p = interpret_func_parallel(parallel_item);
}
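/*
 * Illustrative mapping (hypothetical example, not from the original source):
 * given a statement such as
 *
 *		CREATE FUNCTION add_em(int, int) RETURNS int
 *			AS 'select $1 + $2' LANGUAGE sql
 *			IMMUTABLE STRICT PARALLEL SAFE COST 5;
 *
 * the options list above would yield as_item = 'select $1 + $2',
 * language_item = "sql", volatility_item -> PROVOLATILE_IMMUTABLE,
 * strict_item -> true, parallel_item -> PROPARALLEL_SAFE and
 * cost_item -> 5, while unspecified attributes keep their defaults.
 */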
/*
* For a dynamically linked C language object, the form of the clause is
*
* AS <object file name> [, <link symbol name> ]
*
* In all other cases
*
* AS <object reference, or sql code>
*/
static void
interpret_AS_clause(Oid languageOid, const char *languageName,
char *funcname, List *as,
char **prosrc_str_p, char **probin_str_p)
{
Assert(as != NIL);
if (languageOid == ClanguageId)
{
/*
* For "C" language, store the file name in probin and, when given,
* the link symbol name in prosrc. If link symbol is omitted,
* substitute procedure name. We also allow link symbol to be
* specified as "-", since that was the habit in PG versions before
* 8.4, and there might be dump files out there that don't translate
* that back to "omitted".
*/
*probin_str_p = strVal(linitial(as));
if (list_length(as) == 1)
*prosrc_str_p = funcname;
else
{
*prosrc_str_p = strVal(lsecond(as));
if (strcmp(*prosrc_str_p, "-") == 0)
*prosrc_str_p = funcname;
}
}
else
{
/* Everything else wants the given string in prosrc. */
*prosrc_str_p = strVal(linitial(as));
*probin_str_p = NULL;
if (list_length(as) != 1)
ereport(ERROR,
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
errmsg("only one AS item needed for language \"%s\"",
languageName)));
if (languageOid == INTERNALlanguageId)
{
/*
* In PostgreSQL versions before 6.5, the SQL name of the created
* function could not be different from the internal name, and
* "prosrc" wasn't used. So there is code out there that does
* CREATE FUNCTION xyz AS '' LANGUAGE internal. To preserve some
* modicum of backwards compatibility, accept an empty "prosrc"
* value as meaning the supplied SQL function name.
*/
if (strlen(*prosrc_str_p) == 0)
*prosrc_str_p = funcname;
}
}
}
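/*
 * Example of the two AS-clause forms handled above (illustrative only;
 * "myext" and "add_one" are hypothetical names):
 *
 *		-- C language: file name goes into probin, link symbol into prosrc
 *		CREATE FUNCTION add_one(integer) RETURNS integer
 *			AS '$libdir/myext', 'add_one' LANGUAGE C STRICT;
 *
 *		-- any other language: the single AS item goes into prosrc
 *		CREATE FUNCTION one() RETURNS integer
 *			AS 'SELECT 1' LANGUAGE sql;
 */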
/*
* CreateFunction
* Execute a CREATE FUNCTION (or CREATE PROCEDURE) utility statement.
*/
ObjectAddress
CreateFunction(ParseState *pstate, CreateFunctionStmt *stmt)
{
char *probin_str;
char *prosrc_str;
Oid prorettype;
bool returnsSet;
char *language;
Oid languageOid;
Oid languageValidator;
Node *transformDefElem = NULL;
char *funcname;
Oid namespaceId;
AclResult aclresult;
oidvector *parameterTypes;
ArrayType *allParameterTypes;
ArrayType *parameterModes;
ArrayType *parameterNames;
List *parameterDefaults;
Oid variadicArgType;
List *trftypes_list = NIL;
ArrayType *trftypes;
Oid requiredResultType;
bool isWindowFunc,
isStrict,
security,
isLeakProof;
char volatility;
ArrayType *proconfig;
float4 procost;
float4 prorows;
Oid prosupport;
HeapTuple languageTuple;
Form_pg_language languageStruct;
List *as_clause;
char parallel;
/* Convert list of names to a name and namespace */
namespaceId = QualifiedNameGetCreationNamespace(stmt->funcname,
&funcname);
/* Check we have creation rights in target namespace */
aclresult = pg_namespace_aclcheck(namespaceId, GetUserId(), ACL_CREATE);
if (aclresult != ACLCHECK_OK)
aclcheck_error(aclresult, OBJECT_SCHEMA,
get_namespace_name(namespaceId));
/* Set default attributes */
isWindowFunc = false;
isStrict = false;
security = false;
isLeakProof = false;
volatility = PROVOLATILE_VOLATILE;
proconfig = NULL;
procost = -1; /* indicates not set */
prorows = -1; /* indicates not set */
prosupport = InvalidOid;
parallel = PROPARALLEL_UNSAFE;
/* Extract non-default attributes from stmt->options list */
compute_function_attributes(pstate,
stmt->is_procedure,
stmt->options,
&as_clause, &language, &transformDefElem,
&isWindowFunc, &volatility,
&isStrict, &security, &isLeakProof,
&proconfig, &procost, &prorows,
&prosupport, &parallel);
/* Look up the language and validate permissions */
languageTuple = SearchSysCache1(LANGNAME, PointerGetDatum(language));
if (!HeapTupleIsValid(languageTuple))
ereport(ERROR,
(errcode(ERRCODE_UNDEFINED_OBJECT),
errmsg("language \"%s\" does not exist", language),
Invent "trusted" extensions, and remove the pg_pltemplate catalog. This patch creates a new extension property, "trusted". An extension that's marked that way in its control file can be installed by a non-superuser who has the CREATE privilege on the current database, even if the extension contains objects that normally would have to be created by a superuser. The objects within the extension will (by default) be owned by the bootstrap superuser, but the extension itself will be owned by the calling user. This allows replicating the old behavior around trusted procedural languages, without all the special-case logic in CREATE LANGUAGE. We have, however, chosen to loosen the rules slightly: formerly, only a database owner could take advantage of the special case that allowed installation of a trusted language, but now anyone who has CREATE privilege can do so. Having done that, we can delete the pg_pltemplate catalog, moving the knowledge it contained into the extension script files for the various PLs. This ends up being no change at all for the in-core PLs, but it is a large step forward for external PLs: they can now have the same ease of installation as core PLs do. The old "trusted PL" behavior was only available to PLs that had entries in pg_pltemplate, but now any extension can be marked trusted if appropriate. This also removes one of the stumbling blocks for our Python 2 -> 3 migration, since the association of "plpythonu" with Python 2 is no longer hard-wired into pg_pltemplate's initial contents. Exactly where we go from here on that front remains to be settled, but one problem is fixed. Patch by me, reviewed by Peter Eisentraut, Stephen Frost, and others. Discussion: https://postgr.es/m/5889.1566415762@sss.pgh.pa.us
2020-01-30 00:42:43 +01:00
(extension_file_exists(language) ?
errhint("Use CREATE EXTENSION to load the language into the database.") : 0)));
languageStruct = (Form_pg_language) GETSTRUCT(languageTuple);
languageOid = languageStruct->oid;
if (languageStruct->lanpltrusted)
{
/* if trusted language, need USAGE privilege */
AclResult aclresult;
aclresult = pg_language_aclcheck(languageOid, GetUserId(), ACL_USAGE);
if (aclresult != ACLCHECK_OK)
aclcheck_error(aclresult, OBJECT_LANGUAGE,
NameStr(languageStruct->lanname));
}
else
{
/* if untrusted language, must be superuser */
if (!superuser())
aclcheck_error(ACLCHECK_NO_PRIV, OBJECT_LANGUAGE,
NameStr(languageStruct->lanname));
}
languageValidator = languageStruct->lanvalidator;
ReleaseSysCache(languageTuple);
/*
2015-05-24 03:35:49 +02:00
* Only superuser is allowed to create leakproof functions because
* leakproof functions can see tuples which have not yet been filtered out
* by security barrier views or row level security policies.
*/
if (isLeakProof && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("only superuser can define a leakproof function")));
if (transformDefElem)
{
ListCell *lc;
foreach(lc, castNode(List, transformDefElem))
{
Oid typeid = typenameTypeId(NULL,
lfirst_node(TypeName, lc));
Oid elt = get_base_element_type(typeid);
typeid = elt ? elt : typeid;
get_transform_oid(typeid, languageOid, false);
trftypes_list = lappend_oid(trftypes_list, typeid);
}
}
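/*
 * For illustration (hypothetical example, not from the original source):
 * a definition such as
 *
 *		CREATE FUNCTION f(h hstore) RETURNS hstore
 *			TRANSFORM FOR TYPE hstore LANGUAGE plpython3u AS ...;
 *
 * reaches this point with transformDefElem holding the TypeName list
 * (here just "hstore"); the loop above reduces array types to their
 * element type and errors out if no transform exists for a listed type
 * and this language.
 */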
/*
* Convert remaining parameters of CREATE to form wanted by
* ProcedureCreate.
*/
interpret_function_parameter_list(pstate,
stmt->parameters,
languageOid,
stmt->is_procedure ? OBJECT_PROCEDURE : OBJECT_FUNCTION,
&parameterTypes,
&allParameterTypes,
&parameterModes,
&parameterNames,
&parameterDefaults,
&variadicArgType,
&requiredResultType);
if (stmt->is_procedure)
{
Assert(!stmt->returnType);
prorettype = requiredResultType ? requiredResultType : VOIDOID;
returnsSet = false;
}
else if (stmt->returnType)
{
/* explicit RETURNS clause */
compute_return_type(stmt->returnType, languageOid,
&prorettype, &returnsSet);
if (OidIsValid(requiredResultType) && prorettype != requiredResultType)
ereport(ERROR,
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
errmsg("function result type must be %s because of OUT parameters",
format_type_be(requiredResultType))));
}
else if (OidIsValid(requiredResultType))
{
/* default RETURNS clause from OUT parameters */
prorettype = requiredResultType;
returnsSet = false;
}
else
{
ereport(ERROR,
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
errmsg("function result type must be specified")));
/* Alternative possibility: default to RETURNS VOID */
prorettype = VOIDOID;
returnsSet = false;
}
if (list_length(trftypes_list) > 0)
{
ListCell *lc;
Datum *arr;
int i;
arr = palloc(list_length(trftypes_list) * sizeof(Datum));
i = 0;
foreach(lc, trftypes_list)
arr[i++] = ObjectIdGetDatum(lfirst_oid(lc));
trftypes = construct_array(arr, list_length(trftypes_list),
OIDOID, sizeof(Oid), true, TYPALIGN_INT);
}
else
{
/* store SQL NULL instead of empty array */
trftypes = NULL;
}
interpret_AS_clause(languageOid, language, funcname, as_clause,
&prosrc_str, &probin_str);
/*
* Set default values for COST and ROWS depending on other parameters;
* reject ROWS if it's not returnsSet. NB: pg_dump knows these default
* values, keep it in sync if you change them.
*/
if (procost < 0)
{
/* SQL and PL-language functions are assumed more expensive */
if (languageOid == INTERNALlanguageId ||
languageOid == ClanguageId)
procost = 1;
else
procost = 100;
}
if (prorows < 0)
{
if (returnsSet)
prorows = 1000;
else
prorows = 0; /* dummy value if not returnsSet */
}
else if (!returnsSet)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("ROWS is not applicable when function does not return a set")));
/*
* And now that we have all the parameters, and know we're permitted to do
* so, go ahead and create the function.
*/
return ProcedureCreate(funcname,
namespaceId,
stmt->replace,
returnsSet,
prorettype,
GetUserId(),
languageOid,
languageValidator,
prosrc_str, /* converted to text later */
probin_str, /* converted to text later */
stmt->is_procedure ? PROKIND_PROCEDURE : (isWindowFunc ? PROKIND_WINDOW : PROKIND_FUNCTION),
security,
isLeakProof,
isStrict,
volatility,
parallel,
parameterTypes,
PointerGetDatum(allParameterTypes),
PointerGetDatum(parameterModes),
PointerGetDatum(parameterNames),
parameterDefaults,
PointerGetDatum(trftypes),
PointerGetDatum(proconfig),
prosupport,
procost,
prorows);
}
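/*
 * End-to-end illustration (hypothetical, not from the original source):
 *
 *		CREATE PROCEDURE insert_data(a integer)
 *			LANGUAGE sql AS 'INSERT INTO tbl VALUES ($1)';
 *
 * takes the is_procedure path above: prorettype becomes VOIDOID (no OUT
 * parameters), returnsSet is false, and ProcedureCreate() is invoked with
 * PROKIND_PROCEDURE.
 */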
/*
* Guts of function deletion.
*
* Note: this is also used for aggregate deletion, since the OIDs of
* both functions and aggregates point to pg_proc.
*/
void
RemoveFunctionById(Oid funcOid)
{
Relation relation;
HeapTuple tup;
char prokind;
/*
* Delete the pg_proc tuple.
*/
relation = table_open(ProcedureRelationId, RowExclusiveLock);
tup = SearchSysCache1(PROCOID, ObjectIdGetDatum(funcOid));
if (!HeapTupleIsValid(tup)) /* should not happen */
elog(ERROR, "cache lookup failed for function %u", funcOid);
prokind = ((Form_pg_proc) GETSTRUCT(tup))->prokind;
CatalogTupleDelete(relation, &tup->t_self);
ReleaseSysCache(tup);
table_close(relation, RowExclusiveLock);
/*
* If there's a pg_aggregate tuple, delete that too.
*/
if (prokind == PROKIND_AGGREGATE)
{
relation = table_open(AggregateRelationId, RowExclusiveLock);
tup = SearchSysCache1(AGGFNOID, ObjectIdGetDatum(funcOid));
if (!HeapTupleIsValid(tup)) /* should not happen */
elog(ERROR, "cache lookup failed for pg_aggregate tuple for function %u", funcOid);
CatalogTupleDelete(relation, &tup->t_self);
ReleaseSysCache(tup);
table_close(relation, RowExclusiveLock);
}
}
/*
* Implements the ALTER FUNCTION utility command (except for the
* RENAME and OWNER clauses, which are handled as part of the generic
* ALTER framework).
*/
ObjectAddress
AlterFunction(ParseState *pstate, AlterFunctionStmt *stmt)
{
HeapTuple tup;
Oid funcOid;
Form_pg_proc procForm;
bool is_procedure;
Relation rel;
ListCell *l;
DefElem *volatility_item = NULL;
DefElem *strict_item = NULL;
DefElem *security_def_item = NULL;
DefElem *leakproof_item = NULL;
List *set_items = NIL;
DefElem *cost_item = NULL;
DefElem *rows_item = NULL;
DefElem *support_item = NULL;
DefElem *parallel_item = NULL;
ObjectAddress address;
rel = table_open(ProcedureRelationId, RowExclusiveLock);
funcOid = LookupFuncWithArgs(stmt->objtype, stmt->func, false);
ObjectAddressSet(address, ProcedureRelationId, funcOid);
tup = SearchSysCacheCopy1(PROCOID, ObjectIdGetDatum(funcOid));
if (!HeapTupleIsValid(tup)) /* should not happen */
elog(ERROR, "cache lookup failed for function %u", funcOid);
procForm = (Form_pg_proc) GETSTRUCT(tup);
/* Permission check: must own function */
if (!pg_proc_ownercheck(funcOid, GetUserId()))
aclcheck_error(ACLCHECK_NOT_OWNER, stmt->objtype,
NameListToString(stmt->func->objname));
if (procForm->prokind == PROKIND_AGGREGATE)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("\"%s\" is an aggregate function",
NameListToString(stmt->func->objname))));
is_procedure = (procForm->prokind == PROKIND_PROCEDURE);
/* Examine requested actions. */
foreach(l, stmt->actions)
{
DefElem *defel = (DefElem *) lfirst(l);
if (compute_common_attribute(pstate,
is_procedure,
defel,
&volatility_item,
&strict_item,
&security_def_item,
&leakproof_item,
&set_items,
&cost_item,
&rows_item,
&support_item,
&parallel_item) == false)
elog(ERROR, "option \"%s\" not recognized", defel->defname);
}
if (volatility_item)
procForm->provolatile = interpret_func_volatility(volatility_item);
if (strict_item)
procForm->proisstrict = intVal(strict_item->arg);
if (security_def_item)
procForm->prosecdef = intVal(security_def_item->arg);
if (leakproof_item)
{
procForm->proleakproof = intVal(leakproof_item->arg);
if (procForm->proleakproof && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("only superuser can define a leakproof function")));
}
if (cost_item)
{
procForm->procost = defGetNumeric(cost_item);
if (procForm->procost <= 0)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("COST must be positive")));
}
if (rows_item)
{
procForm->prorows = defGetNumeric(rows_item);
if (procForm->prorows <= 0)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("ROWS must be positive")));
if (!procForm->proretset)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("ROWS is not applicable when function does not return a set")));
}
if (support_item)
{
/* interpret_func_support handles the privilege check */
Oid newsupport = interpret_func_support(support_item);
/* Add or replace dependency on support function */
if (OidIsValid(procForm->prosupport))
changeDependencyFor(ProcedureRelationId, funcOid,
ProcedureRelationId, procForm->prosupport,
newsupport);
else
{
ObjectAddress referenced;
referenced.classId = ProcedureRelationId;
referenced.objectId = newsupport;
referenced.objectSubId = 0;
recordDependencyOn(&address, &referenced, DEPENDENCY_NORMAL);
}
procForm->prosupport = newsupport;
}
if (set_items)
{
Datum datum;
bool isnull;
ArrayType *a;
Datum repl_val[Natts_pg_proc];
bool repl_null[Natts_pg_proc];
bool repl_repl[Natts_pg_proc];
/* extract existing proconfig setting */
datum = SysCacheGetAttr(PROCOID, tup, Anum_pg_proc_proconfig, &isnull);
a = isnull ? NULL : DatumGetArrayTypeP(datum);
/* update according to each SET or RESET item, left to right */
a = update_proconfig_value(a, set_items);
/* update the tuple */
memset(repl_repl, false, sizeof(repl_repl));
repl_repl[Anum_pg_proc_proconfig - 1] = true;
if (a == NULL)
{
repl_val[Anum_pg_proc_proconfig - 1] = (Datum) 0;
repl_null[Anum_pg_proc_proconfig - 1] = true;
}
else
{
repl_val[Anum_pg_proc_proconfig - 1] = PointerGetDatum(a);
repl_null[Anum_pg_proc_proconfig - 1] = false;
}
tup = heap_modify_tuple(tup, RelationGetDescr(rel),
repl_val, repl_null, repl_repl);
}
if (parallel_item)
procForm->proparallel = interpret_func_parallel(parallel_item);
/* Do the update */
CatalogTupleUpdate(rel, &tup->t_self, tup);
InvokeObjectPostAlterHook(ProcedureRelationId, funcOid, 0);
table_close(rel, NoLock);
heap_freetuple(tup);
return address;
}
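/*
 * Illustrative invocation (hypothetical function name): a statement like
 *
 *		ALTER FUNCTION lookup(text)
 *			STABLE PARALLEL SAFE SET search_path = public, pg_temp;
 *
 * arrives here with three actions; provolatile, proparallel and proconfig
 * are adjusted in the copied tuple, which is then written back with a
 * single CatalogTupleUpdate().
 */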
/*
* CREATE CAST
*/
ObjectAddress
CreateCast(CreateCastStmt *stmt)
{
Oid sourcetypeid;
Oid targettypeid;
char sourcetyptype;
char targettyptype;
Oid funcid;
int nargs;
char castcontext;
char castmethod;
HeapTuple tuple;
AclResult aclresult;
ObjectAddress myself;
sourcetypeid = typenameTypeId(NULL, stmt->sourcetype);
targettypeid = typenameTypeId(NULL, stmt->targettype);
sourcetyptype = get_typtype(sourcetypeid);
targettyptype = get_typtype(targettypeid);
/* No pseudo-types allowed */
if (sourcetyptype == TYPTYPE_PSEUDO)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("source data type %s is a pseudo-type",
TypeNameToString(stmt->sourcetype))));
if (targettyptype == TYPTYPE_PSEUDO)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("target data type %s is a pseudo-type",
TypeNameToString(stmt->targettype))));
/* Permission check */
if (!pg_type_ownercheck(sourcetypeid, GetUserId())
&& !pg_type_ownercheck(targettypeid, GetUserId()))
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be owner of type %s or type %s",
format_type_be(sourcetypeid),
format_type_be(targettypeid))));
aclresult = pg_type_aclcheck(sourcetypeid, GetUserId(), ACL_USAGE);
if (aclresult != ACLCHECK_OK)
aclcheck_error_type(aclresult, sourcetypeid);
aclresult = pg_type_aclcheck(targettypeid, GetUserId(), ACL_USAGE);
if (aclresult != ACLCHECK_OK)
aclcheck_error_type(aclresult, targettypeid);
/* Domains are allowed for historical reasons, but we warn */
if (sourcetyptype == TYPTYPE_DOMAIN)
ereport(WARNING,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("cast will be ignored because the source data type is a domain")));
else if (targettyptype == TYPTYPE_DOMAIN)
ereport(WARNING,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("cast will be ignored because the target data type is a domain")));
/* Determine the cast method */
if (stmt->func != NULL)
castmethod = COERCION_METHOD_FUNCTION;
else if (stmt->inout)
castmethod = COERCION_METHOD_INOUT;
else
castmethod = COERCION_METHOD_BINARY;
if (castmethod == COERCION_METHOD_FUNCTION)
{
Form_pg_proc procstruct;
funcid = LookupFuncWithArgs(OBJECT_FUNCTION, stmt->func, false);
tuple = SearchSysCache1(PROCOID, ObjectIdGetDatum(funcid));
if (!HeapTupleIsValid(tuple))
elog(ERROR, "cache lookup failed for function %u", funcid);
procstruct = (Form_pg_proc) GETSTRUCT(tuple);
nargs = procstruct->pronargs;
if (nargs < 1 || nargs > 3)
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("cast function must take one to three arguments")));
if (!IsBinaryCoercible(sourcetypeid, procstruct->proargtypes.values[0]))
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("argument of cast function must match or be binary-coercible from source data type")));
if (nargs > 1 && procstruct->proargtypes.values[1] != INT4OID)
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("second argument of cast function must be type %s",
"integer")));
if (nargs > 2 && procstruct->proargtypes.values[2] != BOOLOID)
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("third argument of cast function must be type %s",
"boolean")));
if (!IsBinaryCoercible(procstruct->prorettype, targettypeid))
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("return data type of cast function must match or be binary-coercible to target data type")));
/*
* Restricting the volatility of a cast function may or may not be a
* good idea in the abstract, but it definitely breaks many old
* user-defined types. Disable this check --- tgl 2/1/03
*/
#ifdef NOT_USED
if (procstruct->provolatile == PROVOLATILE_VOLATILE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("cast function must not be volatile")));
#endif
if (procstruct->prokind != PROKIND_FUNCTION)
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("cast function must be a normal function")));
if (procstruct->proretset)
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("cast function must not return a set")));
ReleaseSysCache(tuple);
}
else
{
funcid = InvalidOid;
nargs = 0;
}
if (castmethod == COERCION_METHOD_BINARY)
{
int16 typ1len;
int16 typ2len;
bool typ1byval;
bool typ2byval;
char typ1align;
char typ2align;
/*
* Must be superuser to create binary-compatible casts, since
* erroneous casts can easily crash the backend.
*/
if (!superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to create a cast WITHOUT FUNCTION")));
/*
* Also, insist that the types match as to size, alignment, and
* pass-by-value attributes; this provides at least a crude check that
* they have similar representations. A pair of types that fail this
* test should certainly not be equated.
*/
get_typlenbyvalalign(sourcetypeid, &typ1len, &typ1byval, &typ1align);
get_typlenbyvalalign(targettypeid, &typ2len, &typ2byval, &typ2align);
if (typ1len != typ2len ||
typ1byval != typ2byval ||
typ1align != typ2align)
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("source and target data types are not physically compatible")));
/*
* We know that composite, enum and array types are never binary-
* compatible with each other. They all have OIDs embedded in them.
*
* Theoretically you could build a user-defined base type that is
* binary-compatible with a composite, enum, or array type. But we
* disallow that too, as in practice such a cast is surely a mistake.
* You can always work around that by writing a cast function.
*/
if (sourcetyptype == TYPTYPE_COMPOSITE ||
targettyptype == TYPTYPE_COMPOSITE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("composite data types are not binary-compatible")));
if (sourcetyptype == TYPTYPE_ENUM ||
targettyptype == TYPTYPE_ENUM)
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("enum data types are not binary-compatible")));
if (OidIsValid(get_element_type(sourcetypeid)) ||
OidIsValid(get_element_type(targettypeid)))
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("array data types are not binary-compatible")));
/*
* We also disallow creating binary-compatibility casts involving
* domains. Casting from a domain to its base type is already
* allowed, and casting the other way ought to go through domain
* coercion to permit constraint checking. Again, if you're intent on
* having your own semantics for that, create a no-op cast function.
*
* NOTE: if we were to relax this, the above checks for composites
* etc. would have to be modified to look through domains to their
* base types.
*/
if (sourcetyptype == TYPTYPE_DOMAIN ||
targettyptype == TYPTYPE_DOMAIN)
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("domain data types must not be marked binary-compatible")));
}
/*
* Allow source and target types to be same only for length coercion
* functions. We assume a multi-arg function does length coercion.
*/
if (sourcetypeid == targettypeid && nargs < 2)
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("source data type and target data type are the same")));
/* convert CoercionContext enum to char value for castcontext */
switch (stmt->context)
{
case COERCION_IMPLICIT:
castcontext = COERCION_CODE_IMPLICIT;
break;
case COERCION_ASSIGNMENT:
castcontext = COERCION_CODE_ASSIGNMENT;
break;
/* COERCION_PLPGSQL is intentionally not covered here */
case COERCION_EXPLICIT:
castcontext = COERCION_CODE_EXPLICIT;
break;
default:
elog(ERROR, "unrecognized CoercionContext: %d", stmt->context);
castcontext = 0; /* keep compiler quiet */
break;
}
myself = CastCreate(sourcetypeid, targettypeid, funcid, castcontext,
castmethod, DEPENDENCY_NORMAL);
return myself;
}
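/*
 * Illustrative statements exercising the three cast methods above
 * ("mytext" and "mytype" are hypothetical types):
 *
 *		-- COERCION_METHOD_FUNCTION
 *		CREATE CAST (bigint AS int4) WITH FUNCTION int4(bigint)
 *			AS ASSIGNMENT;
 *		-- COERCION_METHOD_BINARY: superuser only, types must be
 *		-- physically compatible
 *		CREATE CAST (mytext AS text) WITHOUT FUNCTION;
 *		-- COERCION_METHOD_INOUT: via text output/input conversion
 *		CREATE CAST (mytype AS text) WITH INOUT;
 */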
static void
check_transform_function(Form_pg_proc procstruct)
{
if (procstruct->provolatile == PROVOLATILE_VOLATILE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("transform function must not be volatile")));
if (procstruct->prokind != PROKIND_FUNCTION)
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("transform function must be a normal function")));
if (procstruct->proretset)
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("transform function must not return a set")));
if (procstruct->pronargs != 1)
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("transform function must take one argument")));
if (procstruct->proargtypes.values[0] != INTERNALOID)
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("first argument of transform function must be type %s",
"internal")));
}
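/*
 * Taken together with CreateTransform below, a valid pair of support
 * functions looks like this (hypothetical names, modeled on the contrib
 * transform modules):
 *
 *		CREATE FUNCTION hstore_to_plperl(val internal) RETURNS internal
 *			LANGUAGE C STRICT IMMUTABLE AS '$libdir/hstore_plperl';
 *		CREATE FUNCTION plperl_to_hstore(val internal) RETURNS hstore
 *			LANGUAGE C STRICT IMMUTABLE AS '$libdir/hstore_plperl';
 *		CREATE TRANSFORM FOR hstore LANGUAGE plperl (
 *			FROM SQL WITH FUNCTION hstore_to_plperl(internal),
 *			TO SQL WITH FUNCTION plperl_to_hstore(internal));
 *
 * Both functions satisfy the checks above: one argument of type internal,
 * not volatile, a normal function, and not set-returning.
 */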
/*
* CREATE TRANSFORM
*/
ObjectAddress
CreateTransform(CreateTransformStmt *stmt)
{
Oid typeid;
char typtype;
Oid langid;
Oid fromsqlfuncid;
Oid tosqlfuncid;
AclResult aclresult;
Form_pg_proc procstruct;
Datum values[Natts_pg_transform];
bool nulls[Natts_pg_transform];
bool replaces[Natts_pg_transform];
Oid transformid;
HeapTuple tuple;
HeapTuple newtuple;
Relation relation;
ObjectAddress myself,
referenced;
ObjectAddresses *addrs;
bool is_replace;
/*
* Get the type
*/
typeid = typenameTypeId(NULL, stmt->type_name);
typtype = get_typtype(typeid);
if (typtype == TYPTYPE_PSEUDO)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("data type %s is a pseudo-type",
TypeNameToString(stmt->type_name))));
if (typtype == TYPTYPE_DOMAIN)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("data type %s is a domain",
TypeNameToString(stmt->type_name))));
if (!pg_type_ownercheck(typeid, GetUserId()))
aclcheck_error_type(ACLCHECK_NOT_OWNER, typeid);
aclresult = pg_type_aclcheck(typeid, GetUserId(), ACL_USAGE);
if (aclresult != ACLCHECK_OK)
aclcheck_error_type(aclresult, typeid);
/*
* Get the language
*/
langid = get_language_oid(stmt->lang, false);
aclresult = pg_language_aclcheck(langid, GetUserId(), ACL_USAGE);
if (aclresult != ACLCHECK_OK)
aclcheck_error(aclresult, OBJECT_LANGUAGE, stmt->lang);
/*
* Get the functions
*/
if (stmt->fromsql)
{
fromsqlfuncid = LookupFuncWithArgs(OBJECT_FUNCTION, stmt->fromsql, false);
if (!pg_proc_ownercheck(fromsqlfuncid, GetUserId()))
aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION, NameListToString(stmt->fromsql->objname));
aclresult = pg_proc_aclcheck(fromsqlfuncid, GetUserId(), ACL_EXECUTE);
if (aclresult != ACLCHECK_OK)
aclcheck_error(aclresult, OBJECT_FUNCTION, NameListToString(stmt->fromsql->objname));
tuple = SearchSysCache1(PROCOID, ObjectIdGetDatum(fromsqlfuncid));
if (!HeapTupleIsValid(tuple))
elog(ERROR, "cache lookup failed for function %u", fromsqlfuncid);
procstruct = (Form_pg_proc) GETSTRUCT(tuple);
if (procstruct->prorettype != INTERNALOID)
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("return data type of FROM SQL function must be %s",
"internal")));
check_transform_function(procstruct);
ReleaseSysCache(tuple);
}
else
fromsqlfuncid = InvalidOid;
if (stmt->tosql)
{
tosqlfuncid = LookupFuncWithArgs(OBJECT_FUNCTION, stmt->tosql, false);
if (!pg_proc_ownercheck(tosqlfuncid, GetUserId()))
aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION, NameListToString(stmt->tosql->objname));
aclresult = pg_proc_aclcheck(tosqlfuncid, GetUserId(), ACL_EXECUTE);
if (aclresult != ACLCHECK_OK)
aclcheck_error(aclresult, OBJECT_FUNCTION, NameListToString(stmt->tosql->objname));
tuple = SearchSysCache1(PROCOID, ObjectIdGetDatum(tosqlfuncid));
if (!HeapTupleIsValid(tuple))
elog(ERROR, "cache lookup failed for function %u", tosqlfuncid);
procstruct = (Form_pg_proc) GETSTRUCT(tuple);
if (procstruct->prorettype != typeid)
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("return data type of TO SQL function must be the transform data type")));
check_transform_function(procstruct);
ReleaseSysCache(tuple);
}
else
tosqlfuncid = InvalidOid;
/*
* Ready to go
*/
values[Anum_pg_transform_trftype - 1] = ObjectIdGetDatum(typeid);
values[Anum_pg_transform_trflang - 1] = ObjectIdGetDatum(langid);
values[Anum_pg_transform_trffromsql - 1] = ObjectIdGetDatum(fromsqlfuncid);
values[Anum_pg_transform_trftosql - 1] = ObjectIdGetDatum(tosqlfuncid);
MemSet(nulls, false, sizeof(nulls));
relation = table_open(TransformRelationId, RowExclusiveLock);
tuple = SearchSysCache2(TRFTYPELANG,
ObjectIdGetDatum(typeid),
ObjectIdGetDatum(langid));
if (HeapTupleIsValid(tuple))
{
Form_pg_transform form = (Form_pg_transform) GETSTRUCT(tuple);
if (!stmt->replace)
ereport(ERROR,
(errcode(ERRCODE_DUPLICATE_OBJECT),
errmsg("transform for type %s language \"%s\" already exists",
format_type_be(typeid),
stmt->lang)));
MemSet(replaces, false, sizeof(replaces));
replaces[Anum_pg_transform_trffromsql - 1] = true;
replaces[Anum_pg_transform_trftosql - 1] = true;
newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values, nulls, replaces);
CatalogTupleUpdate(relation, &newtuple->t_self, newtuple);
transformid = form->oid;
ReleaseSysCache(tuple);
is_replace = true;
}
else
{
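/* Creating a new transform: assign its OID explicitly, then insert. */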
transformid = GetNewOidWithIndex(relation, TransformOidIndexId,
Anum_pg_transform_oid);
values[Anum_pg_transform_oid - 1] = ObjectIdGetDatum(transformid);
newtuple = heap_form_tuple(RelationGetDescr(relation), values, nulls);
CatalogTupleInsert(relation, newtuple);
is_replace = false;
}
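/*
 * When replacing, discard the old transform's dependency records (any
 * extension dependency is kept); fresh entries are recorded below.
 */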
if (is_replace)
deleteDependencyRecordsFor(TransformRelationId, transformid, true);
addrs = new_object_addresses();
/* make dependency entries */
ObjectAddressSet(myself, TransformRelationId, transformid);
/* dependency on language */
ObjectAddressSet(referenced, LanguageRelationId, langid);
add_exact_object_address(&referenced, addrs);
/* dependency on type */
ObjectAddressSet(referenced, TypeRelationId, typeid);
add_exact_object_address(&referenced, addrs);
/* dependencies on functions */
if (OidIsValid(fromsqlfuncid))
{
ObjectAddressSet(referenced, ProcedureRelationId, fromsqlfuncid);
add_exact_object_address(&referenced, addrs);
}
if (OidIsValid(tosqlfuncid))
{
ObjectAddressSet(referenced, ProcedureRelationId, tosqlfuncid);
add_exact_object_address(&referenced, addrs);
}
record_object_address_dependencies(&myself, addrs, DEPENDENCY_NORMAL);
free_object_addresses(addrs);
/* dependency on extension */
recordDependencyOnCurrentExtension(&myself, is_replace);
/* Post creation hook for new transform */
InvokeObjectPostCreateHook(TransformRelationId, transformid, 0);
heap_freetuple(newtuple);
table_close(relation, RowExclusiveLock);
return myself;
}
/*
* get_transform_oid - given type OID and language OID, look up a transform OID
*
* If missing_ok is false, throw an error if the transform is not found. If
* true, just return InvalidOid.
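*
* For example (sketch):
*     Oid trf = get_transform_oid(typoid, langoid, true);
*     if (OidIsValid(trf)) ...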
*/
Oid
get_transform_oid(Oid type_id, Oid lang_id, bool missing_ok)
{
Oid oid;
oid = GetSysCacheOid2(TRFTYPELANG, Anum_pg_transform_oid,
ObjectIdGetDatum(type_id),
ObjectIdGetDatum(lang_id));
if (!OidIsValid(oid) && !missing_ok)
ereport(ERROR,
(errcode(ERRCODE_UNDEFINED_OBJECT),
errmsg("transform for type %s language \"%s\" does not exist",
format_type_be(type_id),
get_language_name(lang_id, false))));
return oid;
}
/*
* Subroutine for ALTER FUNCTION/AGGREGATE SET SCHEMA/RENAME
*
* Is there a function with the given name and signature already in the given
* namespace? If so, raise an appropriate error message.
*/
void
IsThereFunctionInNamespace(const char *proname, int pronargs,
oidvector *proargtypes, Oid nspOid)
{
/* check for duplicate name (more friendly than unique-index failure) */
if (SearchSysCacheExists3(PROCNAMEARGSNSP,
CStringGetDatum(proname),
PointerGetDatum(proargtypes),
ObjectIdGetDatum(nspOid)))
ereport(ERROR,
(errcode(ERRCODE_DUPLICATE_FUNCTION),
errmsg("function %s already exists in schema \"%s\"",
funcname_signature_string(proname, pronargs,
NIL, proargtypes->values),
get_namespace_name(nspOid))));
}
/*
* ExecuteDoStmt
* Execute inline procedural-language code
*
* See ExecuteCallStmt() for an explanation of the atomic argument.
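*
* For example, a top-level
*     DO $$ BEGIN RAISE NOTICE 'hi'; END $$;
* runs the block through the default language, plpgsql; a top-level
* caller will normally pass atomic = false, so the block may issue
* transaction control commands.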
*/
void
ExecuteDoStmt(DoStmt *stmt, bool atomic)
{
InlineCodeBlock *codeblock = makeNode(InlineCodeBlock);
ListCell *arg;
DefElem *as_item = NULL;
DefElem *language_item = NULL;
char *language;
Oid laninline;
HeapTuple languageTuple;
Form_pg_language languageStruct;
/* Process options we got from gram.y */
foreach(arg, stmt->args)
{
DefElem *defel = (DefElem *) lfirst(arg);
if (strcmp(defel->defname, "as") == 0)
{
if (as_item)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("conflicting or redundant options")));
as_item = defel;
}
else if (strcmp(defel->defname, "language") == 0)
{
if (language_item)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("conflicting or redundant options")));
language_item = defel;
}
else
elog(ERROR, "option \"%s\" not recognized",
defel->defname);
}
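/* The AS clause, carrying the inline code string, is required. */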
if (as_item)
codeblock->source_text = strVal(as_item->arg);
else
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("no inline code specified")));
/* if LANGUAGE option wasn't specified, use the default */
if (language_item)
language = strVal(language_item->arg);
else
language = "plpgsql";
/* Look up the language and validate permissions */
languageTuple = SearchSysCache1(LANGNAME, PointerGetDatum(language));
if (!HeapTupleIsValid(languageTuple))
ereport(ERROR,
(errcode(ERRCODE_UNDEFINED_OBJECT),
errmsg("language \"%s\" does not exist", language),
Invent "trusted" extensions, and remove the pg_pltemplate catalog. This patch creates a new extension property, "trusted". An extension that's marked that way in its control file can be installed by a non-superuser who has the CREATE privilege on the current database, even if the extension contains objects that normally would have to be created by a superuser. The objects within the extension will (by default) be owned by the bootstrap superuser, but the extension itself will be owned by the calling user. This allows replicating the old behavior around trusted procedural languages, without all the special-case logic in CREATE LANGUAGE. We have, however, chosen to loosen the rules slightly: formerly, only a database owner could take advantage of the special case that allowed installation of a trusted language, but now anyone who has CREATE privilege can do so. Having done that, we can delete the pg_pltemplate catalog, moving the knowledge it contained into the extension script files for the various PLs. This ends up being no change at all for the in-core PLs, but it is a large step forward for external PLs: they can now have the same ease of installation as core PLs do. The old "trusted PL" behavior was only available to PLs that had entries in pg_pltemplate, but now any extension can be marked trusted if appropriate. This also removes one of the stumbling blocks for our Python 2 -> 3 migration, since the association of "plpythonu" with Python 2 is no longer hard-wired into pg_pltemplate's initial contents. Exactly where we go from here on that front remains to be settled, but one problem is fixed. Patch by me, reviewed by Peter Eisentraut, Stephen Frost, and others. Discussion: https://postgr.es/m/5889.1566415762@sss.pgh.pa.us
2020-01-30 00:42:43 +01:00
(extension_file_exists(language) ?
errhint("Use CREATE EXTENSION to load the language into the database.") : 0)));
languageStruct = (Form_pg_language) GETSTRUCT(languageTuple);
codeblock->langOid = languageStruct->oid;
codeblock->langIsTrusted = languageStruct->lanpltrusted;
codeblock->atomic = atomic;
if (languageStruct->lanpltrusted)
{
/* if trusted language, need USAGE privilege */
AclResult aclresult;
aclresult = pg_language_aclcheck(codeblock->langOid, GetUserId(),
ACL_USAGE);
if (aclresult != ACLCHECK_OK)
aclcheck_error(aclresult, OBJECT_LANGUAGE,
NameStr(languageStruct->lanname));
}
else
{
/* if untrusted language, must be superuser */
if (!superuser())
aclcheck_error(ACLCHECK_NO_PRIV, OBJECT_LANGUAGE,
NameStr(languageStruct->lanname));
}
/* get the handler function's OID */
laninline = languageStruct->laninline;
if (!OidIsValid(laninline))
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("language \"%s\" does not support inline code execution",
NameStr(languageStruct->lanname))));
ReleaseSysCache(languageTuple);
/* execute the inline handler */
OidFunctionCall1(laninline, PointerGetDatum(codeblock));
}
/*
* Execute CALL statement
*
* Inside a top-level CALL statement, transaction-terminating commands such as
* COMMIT or a PL-specific equivalent are allowed. The terminology in the SQL
* standard is that CALL establishes a non-atomic execution context. Most
* other commands establish an atomic execution context, in which transaction
* control actions are not allowed. If there are nested executions of CALL,
* we want to track the execution context recursively, so that the nested
* CALLs can also do transaction control. Note, however, that for example in
* CALL -> SELECT -> CALL, the second call cannot do transaction control,
* because the SELECT in between establishes an atomic execution context.
*
* So when ExecuteCallStmt() is called from the top level, we pass in atomic =
* false (recall that that means transactions = yes). We then create a
* CallContext node with content atomic = false, which is passed in the
* fcinfo->context field to the procedure invocation. The language
* implementation should then take appropriate measures to allow or prevent
* transaction commands based on that information, e.g., call
* SPI_connect_ext(SPI_OPT_NONATOMIC). The language should also pass on the
* atomic flag to any nested invocations of CALL.
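*
* A PL's call handler would typically do something like this (sketch):
*
*     CallContext *ctx = (CallContext *) fcinfo->context;
*     bool nonatomic = ctx && IsA(ctx, CallContext) && !ctx->atomic;
*
*     SPI_connect_ext(nonatomic ? SPI_OPT_NONATOMIC : 0);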
*
* The expression data structures and execution context that we create
* within this function are children of the portalContext of the Portal
* that the CALL utility statement runs in. Therefore, any pass-by-ref
* values that we're passing to the procedure will survive transaction
* commits that might occur inside the procedure.
*/
void
ExecuteCallStmt(CallStmt *stmt, ParamListInfo params, bool atomic, DestReceiver *dest)
{
LOCAL_FCINFO(fcinfo, FUNC_MAX_ARGS);
ListCell *lc;
FuncExpr *fexpr;
int nargs;
int i;
AclResult aclresult;
Oid *argtypes;
char **argnames;
char *argmodes;
FmgrInfo flinfo;
CallContext *callcontext;
EState *estate;
ExprContext *econtext;
HeapTuple tp;
PgStat_FunctionCallUsage fcusage;
Datum retval;
fexpr = stmt->funcexpr;
Assert(fexpr);
Assert(IsA(fexpr, FuncExpr));
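/* Caller must have EXECUTE privilege on the procedure. */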
aclresult = pg_proc_aclcheck(fexpr->funcid, GetUserId(), ACL_EXECUTE);
if (aclresult != ACLCHECK_OK)
aclcheck_error(aclresult, OBJECT_PROCEDURE, get_func_name(fexpr->funcid));
/* Prep the context object we'll pass to the procedure */
callcontext = makeNode(CallContext);
callcontext->atomic = atomic;
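/* Fetch the procedure's pg_proc tuple; several fields of it are needed. */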
tp = SearchSysCache1(PROCOID, ObjectIdGetDatum(fexpr->funcid));
if (!HeapTupleIsValid(tp))
elog(ERROR, "cache lookup failed for function %u", fexpr->funcid);
/*
* If proconfig is set we can't allow transaction commands because of the
* way the GUC stacking works: The transaction boundary would have to pop
* the proconfig setting off the stack. That restriction could be lifted
* by redesigning the GUC nesting mechanism a bit.
*/
if (!heap_attisnull(tp, Anum_pg_proc_proconfig, NULL))
        callcontext->atomic = true;
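
    /*
     * For example (an illustrative sketch, not part of the original source):
     * a procedure created with a SET clause runs atomically, so it cannot
     * end the transaction:
     *
     *     CREATE PROCEDURE p() LANGUAGE plpgsql SET work_mem = '4MB'
     *         AS $$ BEGIN COMMIT; END $$;
     *     CALL p();   -- expected to fail: "invalid transaction termination"
     */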

    /*
     * In security definer procedures, we can't allow transaction commands.
     * StartTransaction() insists that the security context stack is empty,
     * and AbortTransaction() resets the security context. This could be
     * reorganized, but right now it doesn't work.
     */
    if (((Form_pg_proc) GETSTRUCT(tp))->prosecdef)
        callcontext->atomic = true;
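
    /*
     * Likewise (illustrative note, not in the original source): COMMIT or
     * ROLLBACK inside a SECURITY DEFINER procedure is expected to fail at
     * runtime for the reason given above.
     */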

    /*
     * Expand named arguments, defaults, etc. We do not want to scribble on
     * the passed-in CallStmt parse tree, so first flat-copy fexpr, allowing
     * us to replace its args field. (Note that expand_function_arguments
     * will not modify any of the passed-in data structures.)
     */
    {
        FuncExpr   *nexpr = makeNode(FuncExpr);

        memcpy(nexpr, fexpr, sizeof(FuncExpr));
        fexpr = nexpr;
    }

    fexpr->args = expand_function_arguments(fexpr->args,
                                            fexpr->funcresulttype,
                                            tp);
    nargs = list_length(fexpr->args);
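
    /*
     * Fetch the parallel arrays describing the procedure's arguments; the
     * argmodes array (possibly NULL) is what identifies OUT arguments below.
     */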
    get_func_arg_info(tp, &argtypes, &argnames, &argmodes);

    ReleaseSysCache(tp);

    /* safety check; see ExecInitFunc() */
    if (nargs > FUNC_MAX_ARGS)
        ereport(ERROR,
                (errcode(ERRCODE_TOO_MANY_ARGUMENTS),
                 errmsg_plural("cannot pass more than %d argument to a procedure",
                               "cannot pass more than %d arguments to a procedure",
                               FUNC_MAX_ARGS,
                               FUNC_MAX_ARGS)));

    /* Initialize function call structure */
    InvokeFunctionExecuteHook(fexpr->funcid);
    fmgr_info(fexpr->funcid, &flinfo);
    fmgr_info_set_expr((Node *) fexpr, &flinfo);
    InitFunctionCallInfoData(*fcinfo, &flinfo, nargs, fexpr->inputcollid,
                             (Node *) callcontext, NULL);
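
    /*
     * Passing callcontext as the fcinfo context node is how the procedure's
     * language handler learns whether transaction control is allowed; see
     * the illustrative sketch following this function.
     */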

    /*
     * Evaluate procedure arguments inside a suitable execution context. Note
     * we can't free this context till the procedure returns.
     */
    estate = CreateExecutorState();
    estate->es_param_list_info = params;
    econtext = CreateExprContext(estate);

    i = 0;
    foreach(lc, fexpr->args)
    {
        if (argmodes && argmodes[i] == PROARGMODE_OUT)
        {
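            /*
             * OUT-mode arguments are not evaluated; pass a NULL placeholder
             * and let the procedure's result supply the value.
             */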
            fcinfo->args[i].value = 0;
            fcinfo->args[i].isnull = true;
        }
        else
        {
            ExprState  *exprstate;
            Datum       val;
            bool        isnull;

            exprstate = ExecPrepareExpr(lfirst(lc), estate);
            val = ExecEvalExprSwitchContext(exprstate, econtext, &isnull);

            fcinfo->args[i].value = val;
            fcinfo->args[i].isnull = isnull;
        }

        i++;
    }

    pgstat_init_function_usage(fcinfo, &fcusage);
    retval = FunctionCallInvoke(fcinfo);
    pgstat_end_function_usage(&fcusage, true);

    if (fexpr->funcresulttype == VOIDOID)
    {
        /* do nothing */
    }
    else if (fexpr->funcresulttype == RECORDOID)
    {
        /*
         * send tuple to client
         */
        HeapTupleHeader td;
        Oid         tupType;
        int32       tupTypmod;
        TupleDesc   retdesc;
        HeapTupleData rettupdata;
        TupOutputState *tstate;
        TupleTableSlot *slot;

        if (fcinfo->isnull)
            elog(ERROR, "procedure returned null record");

        td = DatumGetHeapTupleHeader(retval);
        tupType = HeapTupleHeaderGetTypeId(td);
        tupTypmod = HeapTupleHeaderGetTypMod(td);
        retdesc = lookup_rowtype_tupdesc(tupType, tupTypmod);

        tstate = begin_tup_output_tupdesc(dest, retdesc,
                                          &TTSOpsHeapTuple);
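
        /*
         * Wrap the bare HeapTupleHeader in a local HeapTupleData so it can
         * be stored in the slot without copying the tuple body.
         */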
        rettupdata.t_len = HeapTupleHeaderGetDatumLength(td);
        ItemPointerSetInvalid(&(rettupdata.t_self));
        rettupdata.t_tableOid = InvalidOid;
        rettupdata.t_data = td;

        slot = ExecStoreHeapTuple(&rettupdata, tstate->slot, false);
        tstate->dest->receiveSlot(slot, tstate->dest);

        end_tup_output(tstate);
        ReleaseTupleDesc(retdesc);
    }
    else
        elog(ERROR, "unexpected result type for procedure: %u",
             fexpr->funcresulttype);

    FreeExecutorState(estate);
}
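
/*
 * Illustrative sketch, not part of the original file: a PL call handler can
 * detect whether it was invoked in a nonatomic context by inspecting the
 * CallContext node passed via fcinfo->context above. The helper name is
 * hypothetical; the test mirrors what language handlers are expected to do.
 */
static bool
call_is_nonatomic(FunctionCallInfo fcinfo)
{
    /* nonatomic iff the caller supplied a CallContext with atomic == false */
    return fcinfo->context != NULL &&
        IsA(fcinfo->context, CallContext) &&
        !castNode(CallContext, fcinfo->context)->atomic;
}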

/*
 * Construct the tuple descriptor for a CALL statement return
 */
TupleDesc
CallStmtResultDesc(CallStmt *stmt)
{
    FuncExpr   *fexpr;
    HeapTuple   tuple;
    TupleDesc   tupdesc;

    fexpr = stmt->funcexpr;

    tuple = SearchSysCache1(PROCOID, ObjectIdGetDatum(fexpr->funcid));
    if (!HeapTupleIsValid(tuple))
        elog(ERROR, "cache lookup failed for procedure %u", fexpr->funcid);

    tupdesc = build_function_result_tupdesc_t(tuple);
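
    /*
     * Note: if the procedure has no OUT parameters, tupdesc is NULL here,
     * meaning the CALL returns no result row.
     */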

    ReleaseSysCache(tuple);

    return tupdesc;
}