/*-------------------------------------------------------------------------
 *
 * functioncmds.c
 *
 *    Routines for CREATE and DROP FUNCTION commands and CREATE and DROP
 *    CAST commands.
 *
 * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 *
 * IDENTIFICATION
 *    src/backend/commands/functioncmds.c
 *
 * DESCRIPTION
 *    These routines take the parse tree and pick out the
 *    appropriate arguments/flags, and pass the results to the
 *    corresponding "FooDefine" routines (in src/catalog) that do
 *    the actual catalog-munging.  These routines also verify permission
 *    of the user to execute the command.
 *
 * NOTES
 *    These things must be defined and committed in the following order:
 *      "create function":
 *          input/output, recv/send procedures
 *      "create type":
 *          type
 *      "create operator":
 *          operators
 *
 *-------------------------------------------------------------------------
 */
#include "postgres.h"

#include "access/genam.h"
#include "access/htup_details.h"
#include "access/sysattr.h"
#include "access/table.h"
#include "catalog/catalog.h"
#include "catalog/dependency.h"
#include "catalog/indexing.h"
#include "catalog/objectaccess.h"
#include "catalog/pg_aggregate.h"
#include "catalog/pg_cast.h"
#include "catalog/pg_language.h"
#include "catalog/pg_namespace.h"
#include "catalog/pg_proc.h"
#include "catalog/pg_transform.h"
#include "catalog/pg_type.h"
#include "commands/alter.h"
#include "commands/defrem.h"
#include "commands/extension.h"
#include "commands/proclang.h"
#include "executor/execdesc.h"
#include "executor/executor.h"
#include "executor/functions.h"
#include "funcapi.h"
#include "miscadmin.h"
#include "optimizer/optimizer.h"
#include "parser/analyze.h"
#include "parser/parse_coerce.h"
#include "parser/parse_collate.h"
#include "parser/parse_expr.h"
#include "parser/parse_func.h"
#include "parser/parse_type.h"
#include "pgstat.h"
#include "tcop/pquery.h"
#include "tcop/utility.h"
#include "utils/acl.h"
#include "utils/builtins.h"
#include "utils/fmgroids.h"
#include "utils/guc.h"
#include "utils/lsyscache.h"
#include "utils/memutils.h"
#include "utils/rel.h"
#include "utils/snapmgr.h"
#include "utils/syscache.h"
#include "utils/typcache.h"

/*
 * Examine the RETURNS clause of the CREATE FUNCTION statement
 * and return information about it as *prorettype_p and *returnsSet_p.
 *
 * This is more complex than the average typename lookup because we want to
 * allow a shell type to be used, or even created if the specified return type
 * doesn't exist yet.  (Without this, there's no way to define the I/O procs
 * for a new type.)  But SQL function creation won't cope, so error out if
 * the target language is SQL.  (We do this here, not in the SQL-function
 * validator, so as not to produce a NOTICE and then an ERROR for the same
 * condition.)
 */
static void
compute_return_type(TypeName *returnType, Oid languageOid,
                    Oid *prorettype_p, bool *returnsSet_p)
{
    Oid         rettype;
    Type        typtup;
    AclResult   aclresult;

    typtup = LookupTypeName(NULL, returnType, NULL, false);

    if (typtup)
    {
        if (!((Form_pg_type) GETSTRUCT(typtup))->typisdefined)
        {
            if (languageOid == SQLlanguageId)
                ereport(ERROR,
                        (errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
                         errmsg("SQL function cannot return shell type %s",
                                TypeNameToString(returnType))));
            else
                ereport(NOTICE,
                        (errcode(ERRCODE_WRONG_OBJECT_TYPE),
                         errmsg("return type %s is only a shell",
                                TypeNameToString(returnType))));
        }
        rettype = typeTypeId(typtup);
        ReleaseSysCache(typtup);
    }
    else
    {
        char       *typnam = TypeNameToString(returnType);
        Oid         namespaceId;
        char       *typname;
        ObjectAddress address;

        /*
         * Only C-coded functions can be I/O functions.  We enforce this
         * restriction here mainly to prevent littering the catalogs with
         * shell types due to simple typos in user-defined function
         * definitions.
         */
        if (languageOid != INTERNALlanguageId &&
            languageOid != ClanguageId)
            ereport(ERROR,
                    (errcode(ERRCODE_UNDEFINED_OBJECT),
                     errmsg("type \"%s\" does not exist", typnam)));

        /* Reject if there's typmod decoration, too */
        if (returnType->typmods != NIL)
            ereport(ERROR,
                    (errcode(ERRCODE_SYNTAX_ERROR),
                     errmsg("type modifier cannot be specified for shell type \"%s\"",
                            typnam)));

        /* Otherwise, go ahead and make a shell type */
        ereport(NOTICE,
                (errcode(ERRCODE_UNDEFINED_OBJECT),
                 errmsg("type \"%s\" is not yet defined", typnam),
                 errdetail("Creating a shell type definition.")));
        namespaceId = QualifiedNameGetCreationNamespace(returnType->names,
                                                        &typname);
        aclresult = object_aclcheck(NamespaceRelationId, namespaceId, GetUserId(),
                                    ACL_CREATE);
        if (aclresult != ACLCHECK_OK)
            aclcheck_error(aclresult, OBJECT_SCHEMA,
                           get_namespace_name(namespaceId));
        address = TypeShellMake(typname, namespaceId, GetUserId());
        rettype = address.objectId;
        Assert(OidIsValid(rettype));
    }

    aclresult = object_aclcheck(TypeRelationId, rettype, GetUserId(), ACL_USAGE);
    if (aclresult != ACLCHECK_OK)
        aclcheck_error_type(aclresult, rettype);

    *prorettype_p = rettype;
    *returnsSet_p = returnType->setof;
}
|
|
|
|
|
|
|
|
/*
|
Reconsider the handling of procedure OUT parameters.
Commit 2453ea142 redefined pg_proc.proargtypes to include the types of
OUT parameters, for procedures only. While that had some advantages
for implementing the SQL-spec behavior of DROP PROCEDURE, it was pretty
disastrous from a number of other perspectives. Notably, since the
primary key of pg_proc is name + proargtypes, this made it possible to
have multiple procedures with identical names + input arguments and
differing output argument types. That would make it impossible to call
any one of the procedures by writing just NULL (or "?", or any other
data-type-free notation) for the output argument(s). The change also
seems likely to cause grave confusion for client applications that
examine pg_proc and expect the traditional definition of proargtypes.
Hence, revert the definition of proargtypes to what it was, and
undo a number of complications that had been added to support that.
To support the SQL-spec behavior of DROP PROCEDURE, when there are
no argmode markers in the command's parameter list, we perform the
lookup both ways (that is, matching against both proargtypes and
proallargtypes), succeeding if we get just one unique match.
In principle this could result in ambiguous-function failures
that would not happen when using only one of the two rules.
However, overloading of procedure names is thought to be a pretty
rare usage, so this shouldn't cause many problems in practice.
Postgres-specific code such as pg_dump can defend against any
possibility of such failures by being careful to specify argmodes
for all procedure arguments.
This also fixes a few other bugs in the area of CALL statements
with named parameters, and improves the documentation a little.
catversion bump forced because the representation of procedures
with OUT arguments changes.
Discussion: https://postgr.es/m/3742981.1621533210@sss.pgh.pa.us
2021-06-10 23:11:36 +02:00
|
|
|
* Interpret the function parameter list of a CREATE FUNCTION,
|
|
|
|
* CREATE PROCEDURE, or CREATE AGGREGATE statement.
|
Allow aggregate functions to be VARIADIC.
There's no inherent reason why an aggregate function can't be variadic
(even VARIADIC ANY) if its transition function can handle the case.
Indeed, this patch to add the feature touches none of the planner or
executor, and little of the parser; the main missing stuff was DDL and
pg_dump support.
It is true that variadic aggregates can create the same sort of ambiguity
about parameters versus ORDER BY keys that was complained of when we
(briefly) had both one- and two-argument forms of string_agg(). However,
the policy formed in response to that discussion only said that we'd not
create any built-in aggregates with varying numbers of arguments, not that
we shouldn't allow users to do it. So the logical extension of that is
we can allow users to make variadic aggregates as long as we're wary about
shipping any such in core.
In passing, this patch allows aggregate function arguments to be named, to
the extent of remembering the names in pg_proc and dumping them in pg_dump.
You can't yet call an aggregate using named-parameter notation. That seems
like a likely future extension, but it'll take some work, and it's not what
this patch is really about. Likewise, there's still some work needed to
make window functions handle VARIADIC fully, but I left that for another
day.
initdb forced because of new aggvariadic field in Aggref parse nodes.
2013-09-03 23:08:38 +02:00
|
|
|
*
|
|
|
|
* Input parameters:
|
|
|
|
* parameters: list of FunctionParameter structs
|
|
|
|
* languageOid: OID of function language (InvalidOid if it's CREATE AGGREGATE)
|
Reconsider the handling of procedure OUT parameters.
Commit 2453ea142 redefined pg_proc.proargtypes to include the types of
OUT parameters, for procedures only. While that had some advantages
for implementing the SQL-spec behavior of DROP PROCEDURE, it was pretty
disastrous from a number of other perspectives. Notably, since the
primary key of pg_proc is name + proargtypes, this made it possible to
have multiple procedures with identical names + input arguments and
differing output argument types. That would make it impossible to call
any one of the procedures by writing just NULL (or "?", or any other
data-type-free notation) for the output argument(s). The change also
seems likely to cause grave confusion for client applications that
examine pg_proc and expect the traditional definition of proargtypes.
Hence, revert the definition of proargtypes to what it was, and
undo a number of complications that had been added to support that.
To support the SQL-spec behavior of DROP PROCEDURE, when there are
no argmode markers in the command's parameter list, we perform the
lookup both ways (that is, matching against both proargtypes and
proallargtypes), succeeding if we get just one unique match.
In principle this could result in ambiguous-function failures
that would not happen when using only one of the two rules.
However, overloading of procedure names is thought to be a pretty
rare usage, so this shouldn't cause many problems in practice.
Postgres-specific code such as pg_dump can defend against any
possibility of such failures by being careful to specify argmodes
for all procedure arguments.
This also fixes a few other bugs in the area of CALL statements
with named parameters, and improves the documentation a little.
catversion bump forced because the representation of procedures
with OUT arguments changes.
Discussion: https://postgr.es/m/3742981.1621533210@sss.pgh.pa.us
2021-06-10 23:11:36 +02:00
|
|
|
* objtype: identifies type of object being created
|
2005-04-01 00:46:33 +02:00
|
|
|
*
|
|
|
|
* Results are stored into output parameters. parameterTypes must always
|
Reconsider the handling of procedure OUT parameters.
Commit 2453ea142 redefined pg_proc.proargtypes to include the types of
OUT parameters, for procedures only. While that had some advantages
for implementing the SQL-spec behavior of DROP PROCEDURE, it was pretty
disastrous from a number of other perspectives. Notably, since the
primary key of pg_proc is name + proargtypes, this made it possible to
have multiple procedures with identical names + input arguments and
differing output argument types. That would make it impossible to call
any one of the procedures by writing just NULL (or "?", or any other
data-type-free notation) for the output argument(s). The change also
seems likely to cause grave confusion for client applications that
examine pg_proc and expect the traditional definition of proargtypes.
Hence, revert the definition of proargtypes to what it was, and
undo a number of complications that had been added to support that.
To support the SQL-spec behavior of DROP PROCEDURE, when there are
no argmode markers in the command's parameter list, we perform the
lookup both ways (that is, matching against both proargtypes and
proallargtypes), succeeding if we get just one unique match.
In principle this could result in ambiguous-function failures
that would not happen when using only one of the two rules.
However, overloading of procedure names is thought to be a pretty
rare usage, so this shouldn't cause many problems in practice.
Postgres-specific code such as pg_dump can defend against any
possibility of such failures by being careful to specify argmodes
for all procedure arguments.
This also fixes a few other bugs in the area of CALL statements
with named parameters, and improves the documentation a little.
catversion bump forced because the representation of procedures
with OUT arguments changes.
Discussion: https://postgr.es/m/3742981.1621533210@sss.pgh.pa.us
2021-06-10 23:11:36 +02:00
|
|
|
* be created, but the other arrays/lists can be NULL pointers if not needed.
|
Support ordered-set (WITHIN GROUP) aggregates.
This patch introduces generic support for ordered-set and hypothetical-set
aggregate functions, as well as implementations of the instances defined in
SQL:2008 (percentile_cont(), percentile_disc(), rank(), dense_rank(),
percent_rank(), cume_dist()). We also added mode() though it is not in the
spec, as well as versions of percentile_cont() and percentile_disc() that
can compute multiple percentile values in one pass over the data.
Unlike the original submission, this patch puts full control of the sorting
process in the hands of the aggregate's support functions. To allow the
support functions to find out how they're supposed to sort, a new API
function AggGetAggref() is added to nodeAgg.c. This allows retrieval of
the aggregate call's Aggref node, which may have other uses beyond the
immediate need. There is also support for ordered-set aggregates to
install cleanup callback functions, so that they can be sure that
infrastructure such as tuplesort objects gets cleaned up.
In passing, make some fixes in the recently-added support for variadic
aggregates, and make some editorial adjustments in the recent FILTER
additions for aggregates. Also, simplify use of IsBinaryCoercible() by
allowing it to succeed whenever the target type is ANY or ANYELEMENT.
It was inconsistent that it dealt with other polymorphic target types
but not these.
Atri Sharma and Andrew Gierth; reviewed by Pavel Stehule and Vik Fearing,
and rather heavily editorialized upon by Tom Lane
2013-12-23 22:11:35 +01:00
|
|
|
* variadicArgType is set to the variadic array type if there's a VARIADIC
|
|
|
|
* parameter (there can be only one); or to InvalidOid if not.
|
2005-04-01 00:46:33 +02:00
|
|
|
* requiredResultType is set to InvalidOid if there are no OUT parameters,
|
|
|
|
* else it is set to the OID of the implied result type.
|
2002-04-15 07:22:04 +02:00
|
|
|
*/
|
Allow aggregate functions to be VARIADIC.
There's no inherent reason why an aggregate function can't be variadic
(even VARIADIC ANY) if its transition function can handle the case.
Indeed, this patch to add the feature touches none of the planner or
executor, and little of the parser; the main missing stuff was DDL and
pg_dump support.
It is true that variadic aggregates can create the same sort of ambiguity
about parameters versus ORDER BY keys that was complained of when we
(briefly) had both one- and two-argument forms of string_agg(). However,
the policy formed in response to that discussion only said that we'd not
create any built-in aggregates with varying numbers of arguments, not that
we shouldn't allow users to do it. So the logical extension of that is
we can allow users to make variadic aggregates as long as we're wary about
shipping any such in core.
In passing, this patch allows aggregate function arguments to be named, to
the extent of remembering the names in pg_proc and dumping them in pg_dump.
You can't yet call an aggregate using named-parameter notation. That seems
like a likely future extension, but it'll take some work, and it's not what
this patch is really about. Likewise, there's still some work needed to
make window functions handle VARIADIC fully, but I left that for another
day.
initdb forced because of new aggvariadic field in Aggref parse nodes.
2013-09-03 23:08:38 +02:00
void
interpret_function_parameter_list(ParseState *pstate,
								  List *parameters,
								  Oid languageOid,
								  ObjectType objtype,
								  oidvector **parameterTypes,
								  List **parameterTypes_list,
								  ArrayType **allParameterTypes,
								  ArrayType **parameterModes,
								  ArrayType **parameterNames,
								  List **inParameterNames_list,
								  List **parameterDefaults,
								  Oid *variadicArgType,
								  Oid *requiredResultType)
{
	int			parameterCount = list_length(parameters);
Reconsider the handling of procedure OUT parameters.
Commit 2453ea142 redefined pg_proc.proargtypes to include the types of
OUT parameters, for procedures only. While that had some advantages
for implementing the SQL-spec behavior of DROP PROCEDURE, it was pretty
disastrous from a number of other perspectives. Notably, since the
primary key of pg_proc is name + proargtypes, this made it possible to
have multiple procedures with identical names + input arguments and
differing output argument types. That would make it impossible to call
any one of the procedures by writing just NULL (or "?", or any other
data-type-free notation) for the output argument(s). The change also
seems likely to cause grave confusion for client applications that
examine pg_proc and expect the traditional definition of proargtypes.
Hence, revert the definition of proargtypes to what it was, and
undo a number of complications that had been added to support that.
To support the SQL-spec behavior of DROP PROCEDURE, when there are
no argmode markers in the command's parameter list, we perform the
lookup both ways (that is, matching against both proargtypes and
proallargtypes), succeeding if we get just one unique match.
In principle this could result in ambiguous-function failures
that would not happen when using only one of the two rules.
However, overloading of procedure names is thought to be a pretty
rare usage, so this shouldn't cause many problems in practice.
Postgres-specific code such as pg_dump can defend against any
possibility of such failures by being careful to specify argmodes
for all procedure arguments.
This also fixes a few other bugs in the area of CALL statements
with named parameters, and improves the documentation a little.
catversion bump forced because the representation of procedures
with OUT arguments changes.
Discussion: https://postgr.es/m/3742981.1621533210@sss.pgh.pa.us
2021-06-10 23:11:36 +02:00
	Oid		   *inTypes;
	int			inCount = 0;
	Datum	   *allTypes;
	Datum	   *paramModes;
	Datum	   *paramNames;
	int			outCount = 0;
	int			varCount = 0;
	bool		have_names = false;
	bool		have_defaults = false;
	ListCell   *x;
	int			i;

	*variadicArgType = InvalidOid;	/* default result */
	*requiredResultType = InvalidOid;	/* default result */

	inTypes = (Oid *) palloc(parameterCount * sizeof(Oid));
	allTypes = (Datum *) palloc(parameterCount * sizeof(Datum));
	paramModes = (Datum *) palloc(parameterCount * sizeof(Datum));
	paramNames = (Datum *) palloc0(parameterCount * sizeof(Datum));
	*parameterDefaults = NIL;

	/* Scan the list and extract data into work arrays */
	i = 0;
	foreach(x, parameters)
	{
		FunctionParameter *fp = (FunctionParameter *) lfirst(x);
		TypeName   *t = fp->argType;
		FunctionParameterMode fpmode = fp->mode;
		bool		isinput = false;
		Oid			toid;
		Type		typtup;
		AclResult	aclresult;

		/* For our purposes here, a defaulted mode spec is identical to IN */
		if (fpmode == FUNC_PARAM_DEFAULT)
			fpmode = FUNC_PARAM_IN;

		typtup = LookupTypeName(NULL, t, NULL, false);
		if (typtup)
		{
			if (!((Form_pg_type) GETSTRUCT(typtup))->typisdefined)
			{
				/* As above, hard error if language is SQL */
				if (languageOid == SQLlanguageId)
					ereport(ERROR,
							(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
							 errmsg("SQL function cannot accept shell type %s",
									TypeNameToString(t))));
				/* We don't allow creating aggregates on shell types either */
				else if (objtype == OBJECT_AGGREGATE)
					ereport(ERROR,
							(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
							 errmsg("aggregate cannot accept shell type %s",
									TypeNameToString(t))));
				else
					ereport(NOTICE,
							(errcode(ERRCODE_WRONG_OBJECT_TYPE),
							 errmsg("argument type %s is only a shell",
									TypeNameToString(t))));
			}
			toid = typeTypeId(typtup);
			ReleaseSysCache(typtup);
		}
		else
		{
			ereport(ERROR,
					(errcode(ERRCODE_UNDEFINED_OBJECT),
					 errmsg("type %s does not exist",
							TypeNameToString(t))));
			toid = InvalidOid;	/* keep compiler quiet */
		}

		aclresult = object_aclcheck(TypeRelationId, toid, GetUserId(), ACL_USAGE);
		if (aclresult != ACLCHECK_OK)
			aclcheck_error_type(aclresult, toid);

		if (t->setof)
		{
			if (objtype == OBJECT_AGGREGATE)
				ereport(ERROR,
						(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
						 errmsg("aggregates cannot accept set arguments")));
			else if (objtype == OBJECT_PROCEDURE)
				ereport(ERROR,
						(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
						 errmsg("procedures cannot accept set arguments")));
			else
				ereport(ERROR,
						(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
						 errmsg("functions cannot accept set arguments")));
		}

		/* handle input parameters */
		if (fpmode != FUNC_PARAM_OUT && fpmode != FUNC_PARAM_TABLE)
		{
			/* other input parameters can't follow a VARIADIC parameter */
			if (varCount > 0)
				ereport(ERROR,
						(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
						 errmsg("VARIADIC parameter must be the last input parameter")));
			inTypes[inCount++] = toid;
			isinput = true;
			if (parameterTypes_list)
				*parameterTypes_list = lappend_oid(*parameterTypes_list, toid);
		}

		/* handle output parameters */
		if (fpmode != FUNC_PARAM_IN && fpmode != FUNC_PARAM_VARIADIC)
		{
			if (objtype == OBJECT_PROCEDURE)
			{
				/*
				 * We disallow OUT-after-VARIADIC only for procedures.  While
				 * such a case causes no confusion in ordinary function calls,
				 * it would cause confusion in a CALL statement.
				 */
				if (varCount > 0)
					ereport(ERROR,
							(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
							 errmsg("VARIADIC parameter must be the last parameter")));
				/* Procedures with output parameters always return RECORD */
				*requiredResultType = RECORDOID;
			}
			else if (outCount == 0) /* save first output param's type */
				*requiredResultType = toid;
			outCount++;
		}

		if (fpmode == FUNC_PARAM_VARIADIC)
		{
Support ordered-set (WITHIN GROUP) aggregates.
This patch introduces generic support for ordered-set and hypothetical-set
aggregate functions, as well as implementations of the instances defined in
SQL:2008 (percentile_cont(), percentile_disc(), rank(), dense_rank(),
percent_rank(), cume_dist()). We also added mode() though it is not in the
spec, as well as versions of percentile_cont() and percentile_disc() that
can compute multiple percentile values in one pass over the data.
Unlike the original submission, this patch puts full control of the sorting
process in the hands of the aggregate's support functions. To allow the
support functions to find out how they're supposed to sort, a new API
function AggGetAggref() is added to nodeAgg.c. This allows retrieval of
the aggregate call's Aggref node, which may have other uses beyond the
immediate need. There is also support for ordered-set aggregates to
install cleanup callback functions, so that they can be sure that
infrastructure such as tuplesort objects gets cleaned up.
In passing, make some fixes in the recently-added support for variadic
aggregates, and make some editorial adjustments in the recent FILTER
additions for aggregates. Also, simplify use of IsBinaryCoercible() by
allowing it to succeed whenever the target type is ANY or ANYELEMENT.
It was inconsistent that it dealt with other polymorphic target types
but not these.
Atri Sharma and Andrew Gierth; reviewed by Pavel Stehule and Vik Fearing,
and rather heavily editorialized upon by Tom Lane
2013-12-23 22:11:35 +01:00
			*variadicArgType = toid;
			varCount++;
			/* validate variadic parameter type */
			switch (toid)
			{
				case ANYARRAYOID:
Introduce "anycompatible" family of polymorphic types.
This patch adds the pseudo-types anycompatible, anycompatiblearray,
anycompatiblenonarray, and anycompatiblerange. They work much like
anyelement, anyarray, anynonarray, and anyrange respectively, except
that the actual input values need not match precisely in type.
Instead, if we can find a common supertype (using the same rules
as for UNION/CASE type resolution), then the parser automatically
promotes the input values to that type. For example,
"myfunc(anycompatible, anycompatible)" can match a call with one
integer and one bigint argument, with the integer automatically
promoted to bigint. With anyelement in the definition, the user
would have had to cast the integer explicitly.
The new types also provide a second, independent set of type variables
for function matching; thus with "myfunc(anyelement, anyelement,
anycompatible) returns anycompatible" the first two arguments are
constrained to be the same type, but the third can be some other
type, and the result has the type of the third argument. The need
for more than one set of type variables was foreseen back when we
first invented the polymorphic types, but we never did anything
about it.
Pavel Stehule, revised a bit by me
Discussion: https://postgr.es/m/CAFj8pRDna7VqNi8gR+Tt2Ktmz0cq5G93guc3Sbn_NVPLdXAkqA@mail.gmail.com
2020-03-19 16:43:11 +01:00
				case ANYCOMPATIBLEARRAYOID:
				case ANYOID:
					/* okay */
					break;
				default:
					if (!OidIsValid(get_element_type(toid)))
						ereport(ERROR,
								(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
								 errmsg("VARIADIC parameter must be an array")));
					break;
			}
		}

		allTypes[i] = ObjectIdGetDatum(toid);
		paramModes[i] = CharGetDatum(fpmode);

		if (fp->name && fp->name[0])
		{
			ListCell   *px;

			/*
			 * As of Postgres 9.0 we disallow using the same name for two
			 * input or two output function parameters.  Depending on the
			 * function's language, conflicting input and output names might
			 * be bad too, but we leave it to the PL to complain if so.
			 */
			foreach(px, parameters)
			{
				FunctionParameter *prevfp = (FunctionParameter *) lfirst(px);
				FunctionParameterMode prevfpmode;

				if (prevfp == fp)
					break;
				/* as above, default mode is IN */
				prevfpmode = prevfp->mode;
				if (prevfpmode == FUNC_PARAM_DEFAULT)
					prevfpmode = FUNC_PARAM_IN;
				/* pure in doesn't conflict with pure out */
				if ((fpmode == FUNC_PARAM_IN ||
					 fpmode == FUNC_PARAM_VARIADIC) &&
					(prevfpmode == FUNC_PARAM_OUT ||
					 prevfpmode == FUNC_PARAM_TABLE))
					continue;
				if ((prevfpmode == FUNC_PARAM_IN ||
					 prevfpmode == FUNC_PARAM_VARIADIC) &&
					(fpmode == FUNC_PARAM_OUT ||
					 fpmode == FUNC_PARAM_TABLE))
					continue;
				if (prevfp->name && prevfp->name[0] &&
					strcmp(prevfp->name, fp->name) == 0)
					ereport(ERROR,
							(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
							 errmsg("parameter name \"%s\" used more than once",
									fp->name)));
			}

			paramNames[i] = CStringGetTextDatum(fp->name);
			have_names = true;
		}

		if (inParameterNames_list)
			*inParameterNames_list = lappend(*inParameterNames_list,
											 makeString(fp->name ? fp->name : pstrdup("")));

		if (fp->defexpr)
		{
			Node	   *def;

			if (!isinput)
				ereport(ERROR,
						(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
						 errmsg("only input parameters can have default values")));

Centralize the logic for detecting misplaced aggregates, window funcs, etc.
Formerly we relied on checking after-the-fact to see if an expression
contained aggregates, window functions, or sub-selects when it shouldn't.
This is grotty, easily forgotten (indeed, we had forgotten to teach
DefineIndex about rejecting window functions), and none too efficient
since it requires extra traversals of the parse tree. To improve matters,
define an enum type that classifies all SQL sub-expressions, store it in
ParseState to show what kind of expression we are currently parsing, and
make transformAggregateCall, transformWindowFuncCall, and transformSubLink
check the expression type and throw error if the type indicates the
construct is disallowed. This allows removal of a large number of ad-hoc
checks scattered around the code base. The enum type is sufficiently
fine-grained that we can still produce error messages of at least the
same specificity as before.
Bringing these error checks together revealed that we'd been none too
consistent about phrasing of the error messages, so standardize the wording
a bit.
Also, rewrite checking of aggregate arguments so that it requires only one
traversal of the arguments, rather than up to three as before.
In passing, clean up some more comments left over from add_missing_from
support, and annotate some tests that I think are dead code now that that's
gone. (I didn't risk actually removing said dead code, though.)
2012-08-10 17:35:33 +02:00
			def = transformExpr(pstate, fp->defexpr,
								EXPR_KIND_FUNCTION_DEFAULT);
			def = coerce_to_specific_type(pstate, def, toid, "DEFAULT");
			assign_expr_collations(pstate, def);

			/*
			 * Make sure no variables are referred to (this is probably dead
			 * code now that add_missing_from is history).
			 */
			if (pstate->p_rtable != NIL ||
				contain_var_clause(def))
				ereport(ERROR,
						(errcode(ERRCODE_INVALID_COLUMN_REFERENCE),
						 errmsg("cannot use table references in parameter default value")));

			/*
2012-08-10 17:35:33 +02:00
|
|
|
* transformExpr() should have already rejected subqueries,
|
|
|
|
* aggregates, and window functions, based on the EXPR_KIND_ for a
|
|
|
|
* default expression.
|
|
|
|
*
|
2009-01-06 03:01:27 +01:00
|
|
|
* It can't return a set either --- but coerce_to_specific_type
|
|
|
|
* already checked that for us.
|
|
|
|
*
|
|
|
|
* Note: the point of these restrictions is to ensure that an
|
|
|
|
* expression that, on its face, hasn't got subplans, aggregates,
|
|
|
|
* etc cannot suddenly have them after function default arguments
|
|
|
|
* are inserted.
|
2008-12-18 19:20:35 +01:00
|
|
|
*/
|
2008-12-04 18:51:28 +01:00
|
|
|
|
2008-12-18 19:20:35 +01:00
|
|
|
*parameterDefaults = lappend(*parameterDefaults, def);
|
2008-12-04 18:51:28 +01:00
|
|
|
have_defaults = true;
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
2008-12-18 19:20:35 +01:00
|
|
|
if (isinput && have_defaults)
|
2008-12-04 18:51:28 +01:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
|
2008-12-18 19:20:35 +01:00
|
|
|
errmsg("input parameters after one with a default value must also have defaults")));
|
Reconsider the handling of procedure OUT parameters.
Commit 2453ea142 redefined pg_proc.proargtypes to include the types of
OUT parameters, for procedures only. While that had some advantages
for implementing the SQL-spec behavior of DROP PROCEDURE, it was pretty
disastrous from a number of other perspectives. Notably, since the
primary key of pg_proc is name + proargtypes, this made it possible to
have multiple procedures with identical names + input arguments and
differing output argument types. That would make it impossible to call
any one of the procedures by writing just NULL (or "?", or any other
data-type-free notation) for the output argument(s). The change also
seems likely to cause grave confusion for client applications that
examine pg_proc and expect the traditional definition of proargtypes.
Hence, revert the definition of proargtypes to what it was, and
undo a number of complications that had been added to support that.
To support the SQL-spec behavior of DROP PROCEDURE, when there are
no argmode markers in the command's parameter list, we perform the
lookup both ways (that is, matching against both proargtypes and
proallargtypes), succeeding if we get just one unique match.
In principle this could result in ambiguous-function failures
that would not happen when using only one of the two rules.
However, overloading of procedure names is thought to be a pretty
rare usage, so this shouldn't cause many problems in practice.
Postgres-specific code such as pg_dump can defend against any
possibility of such failures by being careful to specify argmodes
for all procedure arguments.
This also fixes a few other bugs in the area of CALL statements
with named parameters, and improves the documentation a little.
catversion bump forced because the representation of procedures
with OUT arguments changes.
Discussion: https://postgr.es/m/3742981.1621533210@sss.pgh.pa.us
2021-06-10 23:11:36 +02:00
|
|
|
|
|
|
|
/*
|
|
|
|
* For procedures, we also can't allow OUT parameters after one
|
|
|
|
* with a default, because the same sort of confusion arises in a
|
|
|
|
* CALL statement.
|
|
|
|
*/
|
|
|
|
if (objtype == OBJECT_PROCEDURE && have_defaults)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
|
|
|
|
errmsg("procedure OUT parameters cannot appear after one with a default value")));
|
2008-12-04 18:51:28 +01:00
|
|
|
}
|
|
|
|
|
2005-04-01 00:46:33 +02:00
|
|
|
i++;
|
2002-04-15 07:22:04 +02:00
|
|
|
}
|
|
|
|
|
2005-04-01 00:46:33 +02:00
|
|
|
/* Now construct the proper outputs as needed */
|
2021-06-10 23:11:36 +02:00
|
|
|
*parameterTypes = buildoidvector(inTypes, inCount);
|
2005-04-01 00:46:33 +02:00
|
|
|
|
2008-07-16 03:30:23 +02:00
|
|
|
if (outCount > 0 || varCount > 0)
|
2005-04-01 00:46:33 +02:00
|
|
|
{
|
2022-07-01 10:51:45 +02:00
|
|
|
*allParameterTypes = construct_array_builtin(allTypes, parameterCount, OIDOID);
|
|
|
|
*parameterModes = construct_array_builtin(paramModes, parameterCount, CHAROID);
|
2005-04-01 00:46:33 +02:00
|
|
|
if (outCount > 1)
|
|
|
|
*requiredResultType = RECORDOID;
|
|
|
|
/* otherwise we set requiredResultType correctly above */
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
*allParameterTypes = NULL;
|
|
|
|
*parameterModes = NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (have_names)
|
|
|
|
{
|
|
|
|
for (i = 0; i < parameterCount; i++)
|
|
|
|
{
|
|
|
|
if (paramNames[i] == PointerGetDatum(NULL))
|
2008-03-25 23:42:46 +01:00
|
|
|
paramNames[i] = CStringGetTextDatum("");
|
2005-04-01 00:46:33 +02:00
|
|
|
}
|
2022-07-01 10:51:45 +02:00
|
|
|
*parameterNames = construct_array_builtin(paramNames, parameterCount, TEXTOID);
|
2005-04-01 00:46:33 +02:00
|
|
|
}
|
|
|
|
else
|
|
|
|
*parameterNames = NULL;
|
2002-04-15 07:22:04 +02:00
|
|
|
}
|
|
|
|
|
2005-04-01 00:46:33 +02:00
|
|
|
|
2005-03-14 01:19:37 +01:00
|
|
|
/*
|
|
|
|
* Recognize one of the options that can be passed to both CREATE
|
|
|
|
* FUNCTION and ALTER FUNCTION and return it via one of the out
|
|
|
|
* parameters. Returns true if the passed option was recognized. If
|
|
|
|
* the out parameter we were going to assign to points to non-NULL,
|
2007-09-03 02:39:26 +02:00
|
|
|
* raise a duplicate-clause error. (We don't try to detect duplicate
|
|
|
|
* SET parameters though --- if you're redundant, the last one wins.)
|
2005-03-14 01:19:37 +01:00
|
|
|
*/
|
|
|
|
static bool
|
2016-09-06 18:00:00 +02:00
|
|
|
compute_common_attribute(ParseState *pstate,
|
2017-11-30 14:46:13 +01:00
|
|
|
bool is_procedure,
|
2016-09-06 18:00:00 +02:00
|
|
|
DefElem *defel,
|
2005-03-14 01:19:37 +01:00
|
|
|
DefElem **volatility_item,
|
|
|
|
DefElem **strict_item,
|
2007-01-22 02:35:23 +01:00
|
|
|
DefElem **security_item,
|
2012-02-14 04:20:27 +01:00
|
|
|
DefElem **leakproof_item,
|
2007-09-03 02:39:26 +02:00
|
|
|
List **set_items,
|
2007-01-22 02:35:23 +01:00
|
|
|
DefElem **cost_item,
|
2015-09-16 21:38:47 +02:00
|
|
|
DefElem **rows_item,
|
2019-02-10 00:08:48 +01:00
|
|
|
DefElem **support_item,
|
2015-09-16 21:38:47 +02:00
|
|
|
DefElem **parallel_item)
|
2005-03-14 01:19:37 +01:00
|
|
|
{
|
|
|
|
if (strcmp(defel->defname, "volatility") == 0)
|
|
|
|
{
|
2017-11-30 14:46:13 +01:00
|
|
|
if (is_procedure)
|
|
|
|
goto procedure_error;
|
2005-03-14 01:19:37 +01:00
|
|
|
if (*volatility_item)
|
Improve reporting of "conflicting or redundant options" errors.
When reporting "conflicting or redundant options" errors, try to
ensure that errposition() is used, to help the user identify the
offending option.
Formerly, errposition() was invoked in less than 60% of cases. This
patch raises that to over 90%, but there remain a few places where the
ParseState is not readily available. Using errdetail() might improve
the error in such cases, but that is left as a task for the future.
Additionally, since this error is thrown from over 100 places in the
codebase, introduce a dedicated function to throw it, reducing code
duplication.
Extracted from a slightly larger patch by Vignesh C. Reviewed by
Bharath Rupireddy, Alvaro Herrera, Dilip Kumar, Hou Zhijie, Peter
Smith, Daniel Gustafsson, Julien Rouhaud and me.
Discussion: https://postgr.es/m/CALDaNm33FFSS5tVyvmkoK2cCMuDVxcui=gFrjti9ROfynqSAGA@mail.gmail.com
2021-07-15 09:49:45 +02:00
|
|
|
errorConflictingDefElem(defel, pstate);
|
2005-03-14 01:19:37 +01:00
|
|
|
|
|
|
|
*volatility_item = defel;
|
|
|
|
}
|
|
|
|
else if (strcmp(defel->defname, "strict") == 0)
|
|
|
|
{
|
2017-11-30 14:46:13 +01:00
|
|
|
if (is_procedure)
|
|
|
|
goto procedure_error;
|
2005-03-14 01:19:37 +01:00
|
|
|
if (*strict_item)
|
2021-07-15 09:49:45 +02:00
|
|
|
errorConflictingDefElem(defel, pstate);
|
2005-03-14 01:19:37 +01:00
|
|
|
|
|
|
|
*strict_item = defel;
|
|
|
|
}
|
|
|
|
else if (strcmp(defel->defname, "security") == 0)
|
|
|
|
{
|
|
|
|
if (*security_item)
|
2021-07-15 09:49:45 +02:00
|
|
|
errorConflictingDefElem(defel, pstate);
|
2005-03-14 01:19:37 +01:00
|
|
|
|
|
|
|
*security_item = defel;
|
|
|
|
}
|
2012-02-14 04:20:27 +01:00
|
|
|
else if (strcmp(defel->defname, "leakproof") == 0)
|
|
|
|
{
|
2017-11-30 14:46:13 +01:00
|
|
|
if (is_procedure)
|
|
|
|
goto procedure_error;
|
2012-02-14 04:20:27 +01:00
|
|
|
if (*leakproof_item)
|
2021-07-15 09:49:45 +02:00
|
|
|
errorConflictingDefElem(defel, pstate);
|
2012-02-14 04:20:27 +01:00
|
|
|
|
|
|
|
*leakproof_item = defel;
|
|
|
|
}
|
2007-09-03 02:39:26 +02:00
|
|
|
else if (strcmp(defel->defname, "set") == 0)
|
|
|
|
{
|
|
|
|
*set_items = lappend(*set_items, defel->arg);
|
|
|
|
}
|
2007-01-22 02:35:23 +01:00
|
|
|
else if (strcmp(defel->defname, "cost") == 0)
|
|
|
|
{
|
2017-11-30 14:46:13 +01:00
|
|
|
if (is_procedure)
|
|
|
|
goto procedure_error;
|
2007-01-22 02:35:23 +01:00
|
|
|
if (*cost_item)
|
2021-07-15 09:49:45 +02:00
|
|
|
errorConflictingDefElem(defel, pstate);
|
2007-01-22 02:35:23 +01:00
|
|
|
|
|
|
|
*cost_item = defel;
|
|
|
|
}
|
|
|
|
else if (strcmp(defel->defname, "rows") == 0)
|
|
|
|
{
|
2017-11-30 14:46:13 +01:00
|
|
|
if (is_procedure)
|
|
|
|
goto procedure_error;
|
2007-01-22 02:35:23 +01:00
|
|
|
if (*rows_item)
|
2021-07-15 09:49:45 +02:00
|
|
|
errorConflictingDefElem(defel, pstate);
|
2007-01-22 02:35:23 +01:00
|
|
|
|
|
|
|
*rows_item = defel;
|
|
|
|
}
|
2019-02-10 00:08:48 +01:00
|
|
|
else if (strcmp(defel->defname, "support") == 0)
|
|
|
|
{
|
|
|
|
if (is_procedure)
|
|
|
|
goto procedure_error;
|
|
|
|
if (*support_item)
|
2021-07-15 09:49:45 +02:00
|
|
|
errorConflictingDefElem(defel, pstate);
|
2019-02-10 00:08:48 +01:00
|
|
|
|
|
|
|
*support_item = defel;
|
|
|
|
}
|
2015-09-16 21:38:47 +02:00
|
|
|
else if (strcmp(defel->defname, "parallel") == 0)
|
|
|
|
{
|
2017-11-30 14:46:13 +01:00
|
|
|
if (is_procedure)
|
|
|
|
goto procedure_error;
|
2015-09-16 21:38:47 +02:00
|
|
|
if (*parallel_item)
|
2021-07-15 09:49:45 +02:00
|
|
|
errorConflictingDefElem(defel, pstate);
|
2015-09-16 21:38:47 +02:00
|
|
|
|
|
|
|
*parallel_item = defel;
|
|
|
|
}
|
2005-03-14 01:19:37 +01:00
|
|
|
else
|
|
|
|
return false;
|
|
|
|
|
|
|
|
/* Recognized an option */
|
|
|
|
return true;
|
|
|
|
|
2017-11-30 14:46:13 +01:00
|
|
|
procedure_error:
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
|
|
|
|
errmsg("invalid attribute in procedure definition"),
|
|
|
|
parser_errposition(pstate, defel->location)));
|
|
|
|
return false;
|
2005-03-14 01:19:37 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
static char
|
|
|
|
interpret_func_volatility(DefElem *defel)
|
|
|
|
{
|
|
|
|
char *str = strVal(defel->arg);
|
|
|
|
|
|
|
|
if (strcmp(str, "immutable") == 0)
|
|
|
|
return PROVOLATILE_IMMUTABLE;
|
|
|
|
else if (strcmp(str, "stable") == 0)
|
|
|
|
return PROVOLATILE_STABLE;
|
|
|
|
else if (strcmp(str, "volatile") == 0)
|
|
|
|
return PROVOLATILE_VOLATILE;
|
|
|
|
else
|
|
|
|
{
|
|
|
|
elog(ERROR, "invalid volatility \"%s\"", str);
|
|
|
|
return 0; /* keep compiler quiet */
|
|
|
|
}
|
|
|
|
}
|
2002-05-17 20:32:52 +02:00
|
|
|
|
2015-09-16 21:38:47 +02:00
|
|
|
static char
|
|
|
|
interpret_func_parallel(DefElem *defel)
|
|
|
|
{
|
|
|
|
char *str = strVal(defel->arg);
|
|
|
|
|
|
|
|
if (strcmp(str, "safe") == 0)
|
|
|
|
return PROPARALLEL_SAFE;
|
|
|
|
else if (strcmp(str, "unsafe") == 0)
|
|
|
|
return PROPARALLEL_UNSAFE;
|
|
|
|
else if (strcmp(str, "restricted") == 0)
|
|
|
|
return PROPARALLEL_RESTRICTED;
|
|
|
|
else
|
|
|
|
{
|
|
|
|
ereport(ERROR,
|
2016-04-05 22:06:15 +02:00
|
|
|
(errcode(ERRCODE_SYNTAX_ERROR),
|
|
|
|
errmsg("parameter \"parallel\" must be SAFE, RESTRICTED, or UNSAFE")));
|
2015-09-16 21:38:47 +02:00
|
|
|
return PROPARALLEL_UNSAFE; /* keep compiler quiet */
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2007-09-03 02:39:26 +02:00
|
|
|
/*
|
2007-09-03 20:46:30 +02:00
|
|
|
* Update a proconfig value according to a list of VariableSetStmt items.
|
2007-09-03 02:39:26 +02:00
|
|
|
*
|
|
|
|
* The input and result may be NULL to signify a null entry.
|
|
|
|
*/
|
|
|
|
static ArrayType *
|
|
|
|
update_proconfig_value(ArrayType *a, List *set_items)
|
|
|
|
{
|
|
|
|
ListCell *l;
|
|
|
|
|
|
|
|
foreach(l, set_items)
|
|
|
|
{
|
Improve castNode notation by introducing list-extraction-specific variants.
This extends the castNode() notation introduced by commit 5bcab1114 to
provide, in one step, extraction of a list cell's pointer and coercion to
a concrete node type. For example, "lfirst_node(Foo, lc)" is the same
as "castNode(Foo, lfirst(lc))". Almost half of the uses of castNode
that have appeared so far include a list extraction call, so this is
pretty widely useful, and it saves a few more keystrokes compared to the
old way.
As with the previous patch, back-patch the addition of these macros to
pg_list.h, so that the notation will be available when back-patching.
Patch by me, after an idea of Andrew Gierth's.
Discussion: https://postgr.es/m/14197.1491841216@sss.pgh.pa.us
2017-04-10 19:51:29 +02:00
|
|
|
VariableSetStmt *sstmt = lfirst_node(VariableSetStmt, l);
|
2007-09-03 02:39:26 +02:00
|
|
|
|
2007-09-03 20:46:30 +02:00
|
|
|
if (sstmt->kind == VAR_RESET_ALL)
|
|
|
|
a = NULL;
|
|
|
|
else
|
2007-09-03 02:39:26 +02:00
|
|
|
{
|
2007-09-03 20:46:30 +02:00
|
|
|
char *valuestr = ExtractSetVariableArgs(sstmt);
|
2007-09-03 02:39:26 +02:00
|
|
|
|
2007-09-03 20:46:30 +02:00
|
|
|
if (valuestr)
|
2023-05-17 19:06:50 +02:00
|
|
|
a = GUCArrayAdd(a, sstmt->name, valuestr);
|
2007-09-03 20:46:30 +02:00
|
|
|
else /* RESET */
|
2023-05-17 19:06:50 +02:00
|
|
|
a = GUCArrayDelete(a, sstmt->name);
|
2007-09-03 02:39:26 +02:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
return a;
|
|
|
|
}
|
|
|
|
|
2019-02-10 00:08:48 +01:00
|
|
|
static Oid
|
|
|
|
interpret_func_support(DefElem *defel)
|
|
|
|
{
|
|
|
|
List *procName = defGetQualifiedName(defel);
|
|
|
|
Oid procOid;
|
|
|
|
Oid argList[1];
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Support functions always take one INTERNAL argument and return
|
|
|
|
* INTERNAL.
|
|
|
|
*/
|
|
|
|
argList[0] = INTERNALOID;
|
|
|
|
|
|
|
|
procOid = LookupFuncName(procName, 1, argList, true);
|
|
|
|
if (!OidIsValid(procOid))
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_UNDEFINED_FUNCTION),
|
|
|
|
errmsg("function %s does not exist",
|
|
|
|
func_signature_string(procName, 1, NIL, argList))));
|
|
|
|
|
|
|
|
if (get_func_rettype(procOid) != INTERNALOID)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
|
|
|
errmsg("support function %s must return type %s",
|
|
|
|
NameListToString(procName), "internal")));
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Someday we might want an ACL check here; but for now, we insist that
|
|
|
|
* you be superuser to specify a support function, so privilege on the
|
|
|
|
* support function is moot.
|
|
|
|
*/
|
|
|
|
if (!superuser())
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
|
|
|
|
errmsg("must be superuser to specify a support function")));
|
|
|
|
|
|
|
|
return procOid;
|
|
|
|
}
|
|
|
|
|
2007-09-03 02:39:26 +02:00
|
|
|
|
2002-05-17 20:32:52 +02:00
|
|
|
/*
|
|
|
|
* Dissect the list of options assembled in gram.y into function
|
|
|
|
* attributes.
|
|
|
|
*/
|
|
|
|
static void
|
2018-01-26 18:25:44 +01:00
|
|
|
compute_function_attributes(ParseState *pstate,
|
|
|
|
bool is_procedure,
|
|
|
|
List *options,
|
|
|
|
List **as,
|
|
|
|
char **language,
|
|
|
|
Node **transform,
|
|
|
|
bool *windowfunc_p,
|
|
|
|
char *volatility_p,
|
|
|
|
bool *strict_p,
|
|
|
|
bool *security_definer,
|
|
|
|
bool *leakproof_p,
|
|
|
|
ArrayType **proconfig,
|
|
|
|
float4 *procost,
|
|
|
|
float4 *prorows,
|
2019-02-10 00:08:48 +01:00
|
|
|
Oid *prosupport,
|
2018-01-26 18:25:44 +01:00
|
|
|
char *parallel_p)
|
2002-05-17 20:32:52 +02:00
|
|
|
{
|
2004-05-26 06:41:50 +02:00
|
|
|
ListCell *option;
|
2002-05-17 20:32:52 +02:00
|
|
|
DefElem *as_item = NULL;
|
|
|
|
DefElem *language_item = NULL;
|
2015-04-26 16:33:14 +02:00
|
|
|
DefElem *transform_item = NULL;
|
2008-12-31 03:25:06 +01:00
|
|
|
DefElem *windowfunc_item = NULL;
|
2002-05-17 20:32:52 +02:00
|
|
|
DefElem *volatility_item = NULL;
|
|
|
|
DefElem *strict_item = NULL;
|
|
|
|
DefElem *security_item = NULL;
|
2012-02-14 04:20:27 +01:00
|
|
|
DefElem *leakproof_item = NULL;
|
2007-09-03 02:39:26 +02:00
|
|
|
List *set_items = NIL;
|
2007-01-22 02:35:23 +01:00
|
|
|
DefElem *cost_item = NULL;
|
|
|
|
DefElem *rows_item = NULL;
|
2019-02-10 00:08:48 +01:00
|
|
|
DefElem *support_item = NULL;
|
2015-09-16 21:38:47 +02:00
|
|
|
DefElem *parallel_item = NULL;
|
2002-05-17 20:32:52 +02:00
|
|
|
|
|
|
|
foreach(option, options)
|
|
|
|
{
|
|
|
|
DefElem *defel = (DefElem *) lfirst(option);
|
|
|
|
|
|
|
|
if (strcmp(defel->defname, "as") == 0)
|
|
|
|
{
|
|
|
|
if (as_item)
|
2021-07-15 09:49:45 +02:00
|
|
|
errorConflictingDefElem(defel, pstate);
|
2002-05-17 20:32:52 +02:00
|
|
|
as_item = defel;
|
|
|
|
}
|
|
|
|
else if (strcmp(defel->defname, "language") == 0)
|
|
|
|
{
|
|
|
|
if (language_item)
|
2021-07-15 09:49:45 +02:00
|
|
|
errorConflictingDefElem(defel, pstate);
|
2002-05-17 20:32:52 +02:00
|
|
|
language_item = defel;
|
|
|
|
}
|
2015-04-26 16:33:14 +02:00
|
|
|
else if (strcmp(defel->defname, "transform") == 0)
|
|
|
|
{
|
|
|
|
if (transform_item)
|
2021-07-15 09:49:45 +02:00
|
|
|
errorConflictingDefElem(defel, pstate);
|
2015-04-26 16:33:14 +02:00
|
|
|
transform_item = defel;
|
|
|
|
}
|
2008-12-31 03:25:06 +01:00
|
|
|
else if (strcmp(defel->defname, "window") == 0)
|
|
|
|
{
|
|
|
|
if (windowfunc_item)
|
2021-07-15 09:49:45 +02:00
|
|
|
errorConflictingDefElem(defel, pstate);
|
2017-11-30 14:46:13 +01:00
|
|
|
if (is_procedure)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
|
|
|
|
errmsg("invalid attribute in procedure definition"),
|
|
|
|
parser_errposition(pstate, defel->location)));
|
2008-12-31 03:25:06 +01:00
|
|
|
windowfunc_item = defel;
|
|
|
|
}
|
2016-09-06 18:00:00 +02:00
|
|
|
else if (compute_common_attribute(pstate,
|
2017-11-30 14:46:13 +01:00
|
|
|
is_procedure,
|
2016-09-06 18:00:00 +02:00
|
|
|
defel,
|
2005-03-14 01:19:37 +01:00
|
|
|
&volatility_item,
|
|
|
|
&strict_item,
|
2007-01-22 02:35:23 +01:00
|
|
|
&security_item,
|
2012-02-14 04:20:27 +01:00
|
|
|
&leakproof_item,
|
2007-09-03 02:39:26 +02:00
|
|
|
&set_items,
|
2007-01-22 02:35:23 +01:00
|
|
|
&cost_item,
|
2015-09-16 21:38:47 +02:00
|
|
|
&rows_item,
|
2019-02-10 00:08:48 +01:00
|
|
|
&support_item,
|
2015-09-16 21:38:47 +02:00
|
|
|
&parallel_item))
|
2002-05-17 20:32:52 +02:00
|
|
|
{
|
2005-03-14 01:19:37 +01:00
|
|
|
/* recognized common option */
|
|
|
|
continue;
|
2002-05-17 20:32:52 +02:00
|
|
|
}
|
|
|
|
else
|
2003-07-19 01:20:33 +02:00
|
|
|
elog(ERROR, "option \"%s\" not recognized",
|
|
|
|
defel->defname);
|
2002-05-17 20:32:52 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
if (as_item)
|
|
|
|
*as = (List *) as_item->arg;
|
|
|
|
if (language_item)
|
|
|
|
*language = strVal(language_item->arg);
|
2015-04-26 16:33:14 +02:00
|
|
|
if (transform_item)
|
|
|
|
*transform = transform_item->arg;
|
2008-12-31 03:25:06 +01:00
|
|
|
if (windowfunc_item)
|
2022-01-14 10:46:49 +01:00
|
|
|
*windowfunc_p = boolVal(windowfunc_item->arg);
|
2002-05-17 20:32:52 +02:00
|
|
|
if (volatility_item)
|
2005-03-14 01:19:37 +01:00
|
|
|
*volatility_p = interpret_func_volatility(volatility_item);
|
2002-05-17 20:32:52 +02:00
|
|
|
if (strict_item)
|
2022-01-14 10:46:49 +01:00
|
|
|
*strict_p = boolVal(strict_item->arg);
|
2002-05-17 20:32:52 +02:00
|
|
|
if (security_item)
|
2022-01-14 10:46:49 +01:00
|
|
|
*security_definer = boolVal(security_item->arg);
|
2012-02-14 04:20:27 +01:00
|
|
|
if (leakproof_item)
|
2022-01-14 10:46:49 +01:00
|
|
|
*leakproof_p = boolVal(leakproof_item->arg);
|
2007-09-03 02:39:26 +02:00
|
|
|
if (set_items)
|
|
|
|
*proconfig = update_proconfig_value(NULL, set_items);
|
2007-01-22 02:35:23 +01:00
|
|
|
if (cost_item)
|
|
|
|
{
|
|
|
|
*procost = defGetNumeric(cost_item);
|
|
|
|
if (*procost <= 0)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
|
|
|
|
errmsg("COST must be positive")));
|
|
|
|
}
|
|
|
|
if (rows_item)
|
|
|
|
{
|
|
|
|
*prorows = defGetNumeric(rows_item);
|
|
|
|
if (*prorows <= 0)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
|
|
|
|
errmsg("ROWS must be positive")));
|
|
|
|
}
|
2019-02-10 00:08:48 +01:00
|
|
|
if (support_item)
|
|
|
|
*prosupport = interpret_func_support(support_item);
|
2015-09-16 21:38:47 +02:00
|
|
|
if (parallel_item)
|
|
|
|
*parallel_p = interpret_func_parallel(parallel_item);
|
2002-05-17 20:32:52 +02:00
|
|
|
}
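The option-parsing loop above follows one pattern throughout: each recognized DefElem may be given at most once, and a repeat raises the "conflicting or redundant options" error. As a minimal standalone sketch of that pattern (plain Python, not the backend's C; the function name and option representation are illustrative only):

```python
def compute_attributes(options):
    """Sketch of the duplicate-option check used when parsing
    CREATE FUNCTION options: each option name may appear at most once."""
    seen = {}
    for name, value in options:
        if name in seen:
            # corresponds to errorConflictingDefElem() in the backend
            raise ValueError(f'conflicting or redundant options: "{name}"')
        seen[name] = value
    return seen

# A well-formed option list parses; a repeated option is rejected.
assert compute_attributes([("language", "sql"), ("strict", True)]) == {
    "language": "sql", "strict": True}
try:
    compute_attributes([("language", "sql"), ("language", "c")])
    assert False, "duplicate option should have been rejected"
except ValueError:
    pass
```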
|
|
|
|
|
|
|
|
|
2002-04-15 07:22:04 +02:00
|
|
|
/*
|
|
|
|
* For a dynamically linked C language object, the form of the clause is
|
|
|
|
*
|
|
|
|
* AS <object file name> [, <link symbol name> ]
|
|
|
|
*
|
|
|
|
* In all other cases
|
|
|
|
*
|
|
|
|
* AS <object reference, or sql code>
|
|
|
|
*/
|
|
|
|
static void
|
2008-07-16 18:55:24 +02:00
|
|
|
interpret_AS_clause(Oid languageOid, const char *languageName,
|
2021-04-07 21:30:08 +02:00
|
|
|
char *funcname, List *as, Node *sql_body_in,
|
|
|
|
List *parameterTypes, List *inParameterNames,
|
2021-04-15 23:24:12 +02:00
|
|
|
char **prosrc_str_p, char **probin_str_p,
|
|
|
|
Node **sql_body_out,
|
|
|
|
const char *queryString)
|
2002-04-15 07:22:04 +02:00
|
|
|
{
|
2021-04-07 21:30:08 +02:00
|
|
|
if (!sql_body_in && !as)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
|
|
|
|
errmsg("no function body specified")));
|
|
|
|
|
|
|
|
if (sql_body_in && as)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
|
|
|
|
errmsg("duplicate function body specified")));
|
|
|
|
|
|
|
|
if (sql_body_in && languageOid != SQLlanguageId)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
|
|
|
|
errmsg("inline SQL function body only valid for language SQL")));
|
|
|
|
|
|
|
|
*sql_body_out = NULL;
|
2002-04-15 07:22:04 +02:00
|
|
|
|
|
|
|
if (languageOid == ClanguageId)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* For "C" language, store the file name in probin and, when given,
|
2008-07-16 18:55:24 +02:00
|
|
|
* the link symbol name in prosrc. If link symbol is omitted,
|
|
|
|
* substitute procedure name. We also allow link symbol to be
|
|
|
|
* specified as "-", since that was the habit in PG versions before
|
|
|
|
* 8.4, and there might be dump files out there that don't translate
|
|
|
|
* that back to "omitted".
|
2002-04-15 07:22:04 +02:00
|
|
|
*/
|
2004-05-26 06:41:50 +02:00
|
|
|
*probin_str_p = strVal(linitial(as));
|
|
|
|
if (list_length(as) == 1)
|
2008-07-16 18:55:24 +02:00
|
|
|
*prosrc_str_p = funcname;
|
2002-04-15 07:22:04 +02:00
|
|
|
else
|
2008-07-16 18:55:24 +02:00
|
|
|
{
|
2002-04-15 07:22:04 +02:00
|
|
|
*prosrc_str_p = strVal(lsecond(as));
|
2008-07-16 18:55:24 +02:00
|
|
|
if (strcmp(*prosrc_str_p, "-") == 0)
|
|
|
|
*prosrc_str_p = funcname;
|
|
|
|
}
|
2002-04-15 07:22:04 +02:00
|
|
|
}
|
2021-04-07 21:30:08 +02:00
|
|
|
else if (sql_body_in)
|
|
|
|
{
|
|
|
|
SQLFunctionParseInfoPtr pinfo;
|
|
|
|
|
|
|
|
pinfo = (SQLFunctionParseInfoPtr) palloc0(sizeof(SQLFunctionParseInfo));
|
|
|
|
|
|
|
|
pinfo->fname = funcname;
|
|
|
|
pinfo->nargs = list_length(parameterTypes);
|
|
|
|
pinfo->argtypes = (Oid *) palloc(pinfo->nargs * sizeof(Oid));
|
|
|
|
pinfo->argnames = (char **) palloc(pinfo->nargs * sizeof(char *));
|
|
|
|
for (int i = 0; i < list_length(parameterTypes); i++)
|
|
|
|
{
|
|
|
|
char *s = strVal(list_nth(inParameterNames, i));
|
|
|
|
|
|
|
|
pinfo->argtypes[i] = list_nth_oid(parameterTypes, i);
|
|
|
|
if (IsPolymorphicType(pinfo->argtypes[i]))
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
|
|
|
|
errmsg("SQL function with unquoted function body cannot have polymorphic arguments")));
|
|
|
|
|
|
|
|
if (s[0] != '\0')
|
|
|
|
pinfo->argnames[i] = s;
|
|
|
|
else
|
|
|
|
pinfo->argnames[i] = NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (IsA(sql_body_in, List))
|
|
|
|
{
|
|
|
|
List *stmts = linitial_node(List, castNode(List, sql_body_in));
|
|
|
|
ListCell *lc;
|
|
|
|
List *transformed_stmts = NIL;
|
|
|
|
|
|
|
|
foreach(lc, stmts)
|
|
|
|
{
|
|
|
|
Node *stmt = lfirst(lc);
|
|
|
|
Query *q;
|
|
|
|
ParseState *pstate = make_parsestate(NULL);
|
|
|
|
|
2021-04-15 23:24:12 +02:00
|
|
|
pstate->p_sourcetext = queryString;
|
2021-04-07 21:30:08 +02:00
|
|
|
sql_fn_parser_setup(pstate, pinfo);
|
|
|
|
q = transformStmt(pstate, stmt);
|
|
|
|
if (q->commandType == CMD_UTILITY)
|
|
|
|
ereport(ERROR,
|
|
|
|
errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
|
|
|
|
errmsg("%s is not yet supported in unquoted SQL function body",
|
|
|
|
GetCommandTagName(CreateCommandTag(q->utilityStmt))));
|
|
|
|
transformed_stmts = lappend(transformed_stmts, q);
|
|
|
|
free_parsestate(pstate);
|
|
|
|
}
|
|
|
|
|
|
|
|
*sql_body_out = (Node *) list_make1(transformed_stmts);
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
Query *q;
|
|
|
|
ParseState *pstate = make_parsestate(NULL);
|
|
|
|
|
2021-04-15 23:24:12 +02:00
|
|
|
pstate->p_sourcetext = queryString;
|
2021-04-07 21:30:08 +02:00
|
|
|
sql_fn_parser_setup(pstate, pinfo);
|
|
|
|
q = transformStmt(pstate, sql_body_in);
|
|
|
|
if (q->commandType == CMD_UTILITY)
|
|
|
|
ereport(ERROR,
|
|
|
|
errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
|
|
|
|
errmsg("%s is not yet supported in unquoted SQL function body",
|
|
|
|
GetCommandTagName(CreateCommandTag(q->utilityStmt))));
|
2021-04-15 23:24:12 +02:00
|
|
|
free_parsestate(pstate);
|
2021-04-07 21:30:08 +02:00
|
|
|
|
|
|
|
*sql_body_out = (Node *) q;
|
|
|
|
}
|
|
|
|
|
2021-04-15 23:17:20 +02:00
|
|
|
/*
|
|
|
|
* We must put something in prosrc. For the moment, just record an
|
|
|
|
* empty string. It might be useful to store the original text of the
|
|
|
|
* CREATE FUNCTION statement --- but to make actual use of that in
|
|
|
|
* error reports, we'd also have to adjust readfuncs.c to not throw
|
|
|
|
* away node location fields when reading prosqlbody.
|
|
|
|
*/
|
|
|
|
*prosrc_str_p = pstrdup("");
|
|
|
|
|
|
|
|
/* But we definitely don't need probin. */
|
2021-04-07 21:30:08 +02:00
|
|
|
*probin_str_p = NULL;
|
|
|
|
}
|
2002-04-15 07:22:04 +02:00
|
|
|
else
|
|
|
|
{
|
|
|
|
/* Everything else wants the given string in prosrc. */
|
2004-05-26 06:41:50 +02:00
|
|
|
*prosrc_str_p = strVal(linitial(as));
|
2008-07-16 18:55:24 +02:00
|
|
|
*probin_str_p = NULL;
|
2002-04-15 07:22:04 +02:00
|
|
|
|
2004-05-26 06:41:50 +02:00
|
|
|
if (list_length(as) != 1)
|
2003-07-19 01:20:33 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
|
|
|
|
errmsg("only one AS item needed for language \"%s\"",
|
|
|
|
languageName)));
|
2008-07-16 18:55:24 +02:00
|
|
|
|
|
|
|
if (languageOid == INTERNALlanguageId)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* In PostgreSQL versions before 6.5, the SQL name of the created
|
|
|
|
* function could not be different from the internal name, and
|
|
|
|
* "prosrc" wasn't used. So there is code out there that does
|
|
|
|
* CREATE FUNCTION xyz AS '' LANGUAGE internal. To preserve some
|
|
|
|
* modicum of backwards compatibility, accept an empty "prosrc"
|
|
|
|
* value as meaning the supplied SQL function name.
|
|
|
|
*/
|
|
|
|
if (strlen(*prosrc_str_p) == 0)
|
|
|
|
*prosrc_str_p = funcname;
|
|
|
|
}
|
2002-04-15 07:22:04 +02:00
|
|
|
}
|
|
|
|
}
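For the C-language branch of interpret_AS_clause() above, the AS items resolve to probin/prosrc as follows: the first item is the object file name, and the optional second item is the link symbol, with omission or the historical "-" spelling falling back to the SQL-level function name. A standalone sketch of just that decision logic (plain Python, not the backend's C; names are illustrative):

```python
def interpret_c_as_clause(funcname, as_items):
    """Sketch of the C-language AS clause rules:
    AS <object file name> [, <link symbol name>]."""
    probin = as_items[0]            # object file name
    if len(as_items) == 1:
        prosrc = funcname           # link symbol omitted
    else:
        prosrc = as_items[1]
        if prosrc == "-":           # pre-8.4 spelling of "omitted"
            prosrc = funcname
    return probin, prosrc

assert interpret_c_as_clause("myfunc", ["mod.so"]) == ("mod.so", "myfunc")
assert interpret_c_as_clause("myfunc", ["mod.so", "-"]) == ("mod.so", "myfunc")
assert interpret_c_as_clause("myfunc", ["mod.so", "sym"]) == ("mod.so", "sym")
```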
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
|
|
|
* CreateFunction
|
2018-01-26 18:25:44 +01:00
|
|
|
* Execute a CREATE FUNCTION (or CREATE PROCEDURE) utility statement.
|
2002-04-15 07:22:04 +02:00
|
|
|
*/
|
Change many routines to return ObjectAddress rather than OID
The changed routines are mostly those that can be directly called by
ProcessUtilitySlow; the intention is to make the affected object
information more precise, in support for future event trigger changes.
Originally it was envisioned that the OID of the affected object would
be enough, and in most cases that is correct, but upon actually
implementing the event trigger changes it turned out that ObjectAddress
is more widely useful.
Additionally, some command execution routines grew an output argument
that's an object address which provides further info about the executed
command. To wit:
* for ALTER DOMAIN / ADD CONSTRAINT, it corresponds to the address of
the new constraint
* for ALTER OBJECT / SET SCHEMA, it corresponds to the address of the
schema that originally contained the object.
* for ALTER EXTENSION {ADD, DROP} OBJECT, it corresponds to the address
of the object added to or dropped from the extension.
There's no user-visible change in this commit, and no functional change
either.
Discussion: 20150218213255.GC6717@tamriel.snowman.net
Reviewed-By: Stephen Frost, Andres Freund
2015-03-03 18:10:50 +01:00
|
|
|
ObjectAddress
|
2016-09-06 18:00:00 +02:00
|
|
|
CreateFunction(ParseState *pstate, CreateFunctionStmt *stmt)
|
2002-04-15 07:22:04 +02:00
|
|
|
{
|
|
|
|
char *probin_str;
|
|
|
|
char *prosrc_str;
|
2021-04-07 21:30:08 +02:00
|
|
|
Node *prosqlbody;
|
2002-04-15 07:22:04 +02:00
|
|
|
Oid prorettype;
|
|
|
|
bool returnsSet;
|
2002-05-17 20:32:52 +02:00
|
|
|
char *language;
|
2002-04-15 07:22:04 +02:00
|
|
|
Oid languageOid;
|
2002-05-22 19:21:02 +02:00
|
|
|
Oid languageValidator;
|
2015-04-26 16:33:14 +02:00
|
|
|
Node *transformDefElem = NULL;
|
2002-04-15 07:22:04 +02:00
|
|
|
char *funcname;
|
|
|
|
Oid namespaceId;
|
2002-04-27 05:45:03 +02:00
|
|
|
AclResult aclresult;
|
2005-04-01 00:46:33 +02:00
|
|
|
oidvector *parameterTypes;
|
2021-04-07 21:30:08 +02:00
|
|
|
List *parameterTypes_list = NIL;
|
2005-04-01 00:46:33 +02:00
|
|
|
ArrayType *allParameterTypes;
|
|
|
|
ArrayType *parameterModes;
|
|
|
|
ArrayType *parameterNames;
|
2021-04-07 21:30:08 +02:00
|
|
|
List *inParameterNames_list = NIL;
|
2008-12-18 19:20:35 +01:00
|
|
|
List *parameterDefaults;
|
Support ordered-set (WITHIN GROUP) aggregates.
This patch introduces generic support for ordered-set and hypothetical-set
aggregate functions, as well as implementations of the instances defined in
SQL:2008 (percentile_cont(), percentile_disc(), rank(), dense_rank(),
percent_rank(), cume_dist()). We also added mode() though it is not in the
spec, as well as versions of percentile_cont() and percentile_disc() that
can compute multiple percentile values in one pass over the data.
Unlike the original submission, this patch puts full control of the sorting
process in the hands of the aggregate's support functions. To allow the
support functions to find out how they're supposed to sort, a new API
function AggGetAggref() is added to nodeAgg.c. This allows retrieval of
the aggregate call's Aggref node, which may have other uses beyond the
immediate need. There is also support for ordered-set aggregates to
install cleanup callback functions, so that they can be sure that
infrastructure such as tuplesort objects gets cleaned up.
In passing, make some fixes in the recently-added support for variadic
aggregates, and make some editorial adjustments in the recent FILTER
additions for aggregates. Also, simplify use of IsBinaryCoercible() by
allowing it to succeed whenever the target type is ANY or ANYELEMENT.
It was inconsistent that it dealt with other polymorphic target types
but not these.
Atri Sharma and Andrew Gierth; reviewed by Pavel Stehule and Vik Fearing,
and rather heavily editorialized upon by Tom Lane
2013-12-23 22:11:35 +01:00
|
|
|
Oid variadicArgType;
|
2015-04-26 16:33:14 +02:00
|
|
|
List *trftypes_list = NIL;
|
|
|
|
ArrayType *trftypes;
|
2005-04-01 00:46:33 +02:00
|
|
|
Oid requiredResultType;
|
2008-12-31 03:25:06 +01:00
|
|
|
bool isWindowFunc,
|
|
|
|
isStrict,
|
2012-02-14 04:20:27 +01:00
|
|
|
security,
|
|
|
|
isLeakProof;
|
2002-04-15 07:22:04 +02:00
|
|
|
char volatility;
|
2007-09-03 02:39:26 +02:00
|
|
|
ArrayType *proconfig;
|
2007-01-22 02:35:23 +01:00
|
|
|
float4 procost;
|
|
|
|
float4 prorows;
|
2019-02-10 00:08:48 +01:00
|
|
|
Oid prosupport;
|
2002-04-15 07:22:04 +02:00
|
|
|
HeapTuple languageTuple;
|
|
|
|
Form_pg_language languageStruct;
|
2002-05-17 20:32:52 +02:00
|
|
|
List *as_clause;
|
2015-09-16 21:38:47 +02:00
|
|
|
char parallel;
|
2002-04-15 07:22:04 +02:00
|
|
|
|
|
|
|
/* Convert list of names to a name and namespace */
|
|
|
|
namespaceId = QualifiedNameGetCreationNamespace(stmt->funcname,
|
|
|
|
&funcname);
|
|
|
|
|
2002-04-27 05:45:03 +02:00
|
|
|
/* Check we have creation rights in target namespace */
|
2022-11-13 08:11:17 +01:00
|
|
|
aclresult = object_aclcheck(NamespaceRelationId, namespaceId, GetUserId(), ACL_CREATE);
|
2002-04-27 05:45:03 +02:00
|
|
|
if (aclresult != ACLCHECK_OK)
|
2017-12-02 15:26:34 +01:00
|
|
|
aclcheck_error(aclresult, OBJECT_SCHEMA,
|
2003-08-01 02:15:26 +02:00
|
|
|
get_namespace_name(namespaceId));
|
2002-04-27 05:45:03 +02:00
|
|
|
|
2018-01-26 18:25:44 +01:00
|
|
|
/* Set default attributes */
|
2021-04-07 21:30:08 +02:00
|
|
|
as_clause = NIL;
|
|
|
|
language = NULL;
|
2008-12-31 03:25:06 +01:00
|
|
|
isWindowFunc = false;
|
2002-05-17 20:32:52 +02:00
|
|
|
isStrict = false;
|
2002-05-18 15:48:01 +02:00
|
|
|
security = false;
|
2012-02-14 04:20:27 +01:00
|
|
|
isLeakProof = false;
|
2002-05-17 20:32:52 +02:00
|
|
|
volatility = PROVOLATILE_VOLATILE;
|
2007-09-03 02:39:26 +02:00
|
|
|
proconfig = NULL;
|
2007-01-22 02:35:23 +01:00
|
|
|
procost = -1; /* indicates not set */
|
|
|
|
prorows = -1; /* indicates not set */
|
2019-02-10 00:08:48 +01:00
|
|
|
prosupport = InvalidOid;
|
2015-09-16 21:38:47 +02:00
|
|
|
parallel = PROPARALLEL_UNSAFE;
|
2002-05-17 20:32:52 +02:00
|
|
|
|
2018-01-26 18:25:44 +01:00
|
|
|
/* Extract non-default attributes from stmt->options list */
|
|
|
|
compute_function_attributes(pstate,
|
|
|
|
stmt->is_procedure,
|
|
|
|
stmt->options,
|
|
|
|
&as_clause, &language, &transformDefElem,
|
|
|
|
&isWindowFunc, &volatility,
|
|
|
|
&isStrict, &security, &isLeakProof,
|
2019-02-10 00:08:48 +01:00
|
|
|
&proconfig, &procost, &prorows,
|
|
|
|
&prosupport, &parallel);
|
2002-05-17 20:32:52 +02:00
|
|
|
|
2021-04-07 21:30:08 +02:00
|
|
|
if (!language)
|
|
|
|
{
|
|
|
|
if (stmt->sql_body)
|
|
|
|
language = "sql";
|
|
|
|
else
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
|
|
|
|
errmsg("no language specified")));
|
|
|
|
}
|
|
|
|
|
2002-04-15 07:22:04 +02:00
|
|
|
/* Look up the language and validate permissions */
|
2011-11-17 20:20:13 +01:00
|
|
|
languageTuple = SearchSysCache1(LANGNAME, PointerGetDatum(language));
|
2002-04-15 07:22:04 +02:00
|
|
|
if (!HeapTupleIsValid(languageTuple))
|
2003-07-19 01:20:33 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_UNDEFINED_OBJECT),
|
2011-11-17 20:20:13 +01:00
|
|
|
errmsg("language \"%s\" does not exist", language),
|
Invent "trusted" extensions, and remove the pg_pltemplate catalog.
This patch creates a new extension property, "trusted". An extension
that's marked that way in its control file can be installed by a
non-superuser who has the CREATE privilege on the current database,
even if the extension contains objects that normally would have to be
created by a superuser. The objects within the extension will (by
default) be owned by the bootstrap superuser, but the extension itself
will be owned by the calling user. This allows replicating the old
behavior around trusted procedural languages, without all the
special-case logic in CREATE LANGUAGE. We have, however, chosen to
loosen the rules slightly: formerly, only a database owner could take
advantage of the special case that allowed installation of a trusted
language, but now anyone who has CREATE privilege can do so.
Having done that, we can delete the pg_pltemplate catalog, moving the
knowledge it contained into the extension script files for the various
PLs. This ends up being no change at all for the in-core PLs, but it is
a large step forward for external PLs: they can now have the same ease
of installation as core PLs do. The old "trusted PL" behavior was only
available to PLs that had entries in pg_pltemplate, but now any
extension can be marked trusted if appropriate.
This also removes one of the stumbling blocks for our Python 2 -> 3
migration, since the association of "plpythonu" with Python 2 is no
longer hard-wired into pg_pltemplate's initial contents. Exactly where
we go from here on that front remains to be settled, but one problem
is fixed.
Patch by me, reviewed by Peter Eisentraut, Stephen Frost, and others.
Discussion: https://postgr.es/m/5889.1566415762@sss.pgh.pa.us
2020-01-30 00:42:43 +01:00
|
|
|
(extension_file_exists(language) ?
|
2018-04-27 19:42:03 +02:00
|
|
|
errhint("Use CREATE EXTENSION to load the language into the database.") : 0)));
|
2004-08-29 07:07:03 +02:00
|
|
|
|
2002-04-15 07:22:04 +02:00
|
|
|
languageStruct = (Form_pg_language) GETSTRUCT(languageTuple);
|
Remove WITH OIDS support, change oid catalog column visibility.
Previously tables declared WITH OIDS, including a significant fraction
of the catalog tables, stored the oid column not as a normal column,
but as part of the tuple header.
This special column was not shown by default, which was somewhat odd,
as it's often (consider e.g. pg_class.oid) one of the more important
parts of a row. Neither pg_dump nor COPY included the contents of the
oid column by default.
The fact that the oid column was not an ordinary column necessitated a
significant amount of special case code to support oid columns. That
already was painful for the existing, but upcoming work aiming to make
table storage pluggable, would have required expanding and duplicating
that "specialness" significantly.
WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0).
Remove it.
Removing includes:
- CREATE TABLE and ALTER TABLE syntax for declaring the table to be
WITH OIDS has been removed (WITH (oids[ = true]) will error out)
- pg_dump does not support dumping tables declared WITH OIDS and will
issue a warning when dumping one (and ignore the oid column).
- restoring an pg_dump archive with pg_restore will warn when
restoring a table with oid contents (and ignore the oid column)
- COPY will refuse to load binary dump that includes oids.
- pg_upgrade will error out when encountering tables declared WITH
OIDS, they have to be altered to remove the oid column first.
- Functionality to access the oid of the last inserted row (like
plpgsql's RESULT_OID, spi's SPI_lastoid, ...) has been removed.
The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false)
for CREATE TABLE) is still supported. While that requires a bit of
support code, it seems unnecessary to break applications / dumps that
do not use oids, and are explicit about not using them.
The biggest user of WITH OID columns was postgres' catalog. This
commit changes all 'magic' oid columns to be columns that are normally
declared and stored. To reduce unnecessary query breakage all the
newly added columns are still named 'oid', even if a table's column
naming scheme would indicate 'reloid' or such. This obviously
requires adapting a lot code, mostly replacing oid access via
HeapTupleGetOid() with access to the underlying Form_pg_*->oid column.
The bootstrap process now assigns oids for all oid columns in
genbki.pl that do not have an explicit value (starting at the largest
oid previously used), only oids assigned later by oids will be above
FirstBootstrapObjectId. As the oid column now is a normal column the
special bootstrap syntax for oids has been removed.
Oids are not automatically assigned during insertion anymore, all
backend code explicitly assigns oids with GetNewOidWithIndex(). For
the rare case that insertions into the catalog via SQL are called for
the new pg_nextoid() function can be used (which only works on catalog
tables).
The fact that oid columns on system tables are now normal columns
means that they will be included in the set of columns expanded
by * (i.e. SELECT * FROM pg_class will now include the table's oid,
previously it did not). It'd not technically be hard to hide oid
column by default, but that'd mean confusing behavior would either
have to be carried forward forever, or it'd cause breakage down the
line.
While it's not unlikely that further adjustments are needed, the
scope/invasiveness of the patch makes it worthwhile to get merge this
now. It's painful to maintain externally, too complicated to commit
after the code code freeze, and a dependency of a number of other
patches.
Catversion bump, for obvious reasons.
Author: Andres Freund, with contributions by John Naylor
Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de
2018-11-21 00:36:57 +01:00
|
|
|
languageOid = languageStruct->oid;
|
2002-04-15 07:22:04 +02:00
|
|
|
|
2002-04-27 05:45:03 +02:00
|
|
|
if (languageStruct->lanpltrusted)
|
|
|
|
{
|
|
|
|
/* if trusted language, need USAGE privilege */
|
2022-11-13 08:11:17 +01:00
|
|
|
aclresult = object_aclcheck(LanguageRelationId, languageOid, GetUserId(), ACL_USAGE);
|
2002-04-27 05:45:03 +02:00
|
|
|
if (aclresult != ACLCHECK_OK)
|
2017-12-02 15:26:34 +01:00
|
|
|
aclcheck_error(aclresult, OBJECT_LANGUAGE,
|
2003-08-01 02:15:26 +02:00
|
|
|
NameStr(languageStruct->lanname));
|
2002-04-27 05:45:03 +02:00
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
/* if untrusted language, must be superuser */
|
|
|
|
if (!superuser())
|
2017-12-02 15:26:34 +01:00
|
|
|
aclcheck_error(ACLCHECK_NO_PRIV, OBJECT_LANGUAGE,
|
2003-08-01 02:15:26 +02:00
|
|
|
NameStr(languageStruct->lanname));
|
2002-04-27 05:45:03 +02:00
|
|
|
}
|
2002-04-15 07:22:04 +02:00
|
|
|
|
2002-05-22 19:21:02 +02:00
|
|
|
languageValidator = languageStruct->lanvalidator;
|
|
|
|
|
2002-04-15 07:22:04 +02:00
|
|
|
ReleaseSysCache(languageTuple);
|
|
|
|
|
2012-02-14 04:20:27 +01:00
|
|
|
/*
|
Rename pg_rowsecurity -> pg_policy and other fixes
As pointed out by Robert, we should really have named pg_rowsecurity
pg_policy, as the objects stored in that catalog are policies. This
patch fixes that and updates the column names to start with 'pol' to
match the new catalog name.
The security consideration for COPY with row level security, also
pointed out by Robert, has also been addressed by remembering and
re-checking the OID of the relation initially referenced during COPY
processing, to make sure it hasn't changed under us by the time we
finish planning out the query which has been built.
Robert and Alvaro also commented on missing OCLASS and OBJECT entries
for POLICY (formerly ROWSECURITY or POLICY, depending) in various
places. This patch fixes that too, which also happens to add the
ability to COMMENT on policies.
In passing, attempt to improve the consistency of messages, comments,
and documentation as well. This removes various incarnations of
'row-security', 'row-level security', 'Row-security', etc, in favor
of 'policy', 'row level security' or 'row_security' as appropriate.
Happy Thanksgiving!
2014-11-27 07:06:36 +01:00
|
|
|
* Only superuser is allowed to create leakproof functions because
|
|
|
|
* leakproof functions can see tuples which have not yet been filtered out
|
2021-04-21 08:14:43 +02:00
|
|
|
* by security barrier views or row-level security policies.
|
2012-02-14 04:20:27 +01:00
|
|
|
*/
|
|
|
|
if (isLeakProof && !superuser())
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
|
|
|
|
errmsg("only superuser can define a leakproof function")));
|
|
|
|
|
2015-04-26 16:33:14 +02:00
|
|
|
if (transformDefElem)
|
|
|
|
{
|
|
|
|
ListCell *lc;
|
|
|
|
|
2017-01-27 01:47:03 +01:00
|
|
|
foreach(lc, castNode(List, transformDefElem))
|
2015-04-26 16:33:14 +02:00
|
|
|
{
|
Improve castNode notation by introducing list-extraction-specific variants.
This extends the castNode() notation introduced by commit 5bcab1114 to
provide, in one step, extraction of a list cell's pointer and coercion to
a concrete node type. For example, "lfirst_node(Foo, lc)" is the same
as "castNode(Foo, lfirst(lc))". Almost half of the uses of castNode
that have appeared so far include a list extraction call, so this is
pretty widely useful, and it saves a few more keystrokes compared to the
old way.
As with the previous patch, back-patch the addition of these macros to
pg_list.h, so that the notation will be available when back-patching.
Patch by me, after an idea of Andrew Gierth's.
Discussion: https://postgr.es/m/14197.1491841216@sss.pgh.pa.us
2017-04-10 19:51:29 +02:00
|
|
|
Oid typeid = typenameTypeId(NULL,
|
|
|
|
lfirst_node(TypeName, lc));
|
2015-04-26 16:33:14 +02:00
|
|
|
Oid elt = get_base_element_type(typeid);
|
2015-05-24 03:35:49 +02:00
|
|
|
|
2015-04-26 16:33:14 +02:00
|
|
|
typeid = elt ? elt : typeid;
|
|
|
|
|
|
|
|
get_transform_oid(typeid, languageOid, false);
|
|
|
|
trftypes_list = lappend_oid(trftypes_list, typeid);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2002-04-15 07:22:04 +02:00
|
|
|
/*
|
|
|
|
* Convert remaining parameters of CREATE to form wanted by
|
|
|
|
* ProcedureCreate.
|
|
|
|
*/
|
2016-09-06 18:00:00 +02:00
|
|
|
interpret_function_parameter_list(pstate,
|
|
|
|
stmt->parameters,
|
Allow aggregate functions to be VARIADIC.
There's no inherent reason why an aggregate function can't be variadic
(even VARIADIC ANY) if its transition function can handle the case.
Indeed, this patch to add the feature touches none of the planner or
executor, and little of the parser; the main missing stuff was DDL and
pg_dump support.
It is true that variadic aggregates can create the same sort of ambiguity
about parameters versus ORDER BY keys that was complained of when we
(briefly) had both one- and two-argument forms of string_agg(). However,
the policy formed in response to that discussion only said that we'd not
create any built-in aggregates with varying numbers of arguments, not that
we shouldn't allow users to do it. So the logical extension of that is
we can allow users to make variadic aggregates as long as we're wary about
shipping any such in core.
In passing, this patch allows aggregate function arguments to be named, to
the extent of remembering the names in pg_proc and dumping them in pg_dump.
You can't yet call an aggregate using named-parameter notation. That seems
like a likely future extension, but it'll take some work, and it's not what
this patch is really about. Likewise, there's still some work needed to
make window functions handle VARIADIC fully, but I left that for another
day.
initdb forced because of new aggvariadic field in Aggref parse nodes.
2013-09-03 23:08:38 +02:00
|
|
|
languageOid,
|
2017-11-30 14:46:13 +01:00
|
|
|
stmt->is_procedure ? OBJECT_PROCEDURE : OBJECT_FUNCTION,
|
2013-09-03 23:08:38 +02:00
|
|
|
&parameterTypes,
|
2021-04-07 21:30:08 +02:00
|
|
|
&parameterTypes_list,
|
2013-09-03 23:08:38 +02:00
|
|
|
&allParameterTypes,
|
|
|
|
&parameterModes,
|
|
|
|
&parameterNames,
|
2021-04-07 21:30:08 +02:00
|
|
|
&inParameterNames_list,
|
2013-09-03 23:08:38 +02:00
|
|
|
¶meterDefaults,
|
Support ordered-set (WITHIN GROUP) aggregates.
This patch introduces generic support for ordered-set and hypothetical-set
aggregate functions, as well as implementations of the instances defined in
SQL:2008 (percentile_cont(), percentile_disc(), rank(), dense_rank(),
percent_rank(), cume_dist()). We also added mode() though it is not in the
spec, as well as versions of percentile_cont() and percentile_disc() that
can compute multiple percentile values in one pass over the data.
Unlike the original submission, this patch puts full control of the sorting
process in the hands of the aggregate's support functions. To allow the
support functions to find out how they're supposed to sort, a new API
function AggGetAggref() is added to nodeAgg.c. This allows retrieval of
the aggregate call's Aggref node, which may have other uses beyond the
immediate need. There is also support for ordered-set aggregates to
install cleanup callback functions, so that they can be sure that
infrastructure such as tuplesort objects gets cleaned up.
In passing, make some fixes in the recently-added support for variadic
aggregates, and make some editorial adjustments in the recent FILTER
additions for aggregates. Also, simplify use of IsBinaryCoercible() by
allowing it to succeed whenever the target type is ANY or ANYELEMENT.
It was inconsistent that it dealt with other polymorphic target types
but not these.
Atri Sharma and Andrew Gierth; reviewed by Pavel Stehule and Vik Fearing,
and rather heavily editorialized upon by Tom Lane
2013-12-23 22:11:35 +01:00
|
|
|
&variadicArgType,
|
2013-09-03 23:08:38 +02:00
|
|
|
&requiredResultType);
|
2005-04-01 00:46:33 +02:00
|
|
|
|
2017-11-30 14:46:13 +01:00
|
|
|
if (stmt->is_procedure)
|
|
|
|
{
|
|
|
|
Assert(!stmt->returnType);
|
2018-03-14 16:47:21 +01:00
|
|
|
prorettype = requiredResultType ? requiredResultType : VOIDOID;
|
2017-11-30 14:46:13 +01:00
|
|
|
returnsSet = false;
|
|
|
|
}
|
|
|
|
else if (stmt->returnType)
|
2005-04-01 00:46:33 +02:00
|
|
|
{
|
|
|
|
/* explicit RETURNS clause */
|
|
|
|
compute_return_type(stmt->returnType, languageOid,
|
|
|
|
&prorettype, &returnsSet);
|
|
|
|
if (OidIsValid(requiredResultType) && prorettype != requiredResultType)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
|
|
|
|
errmsg("function result type must be %s because of OUT parameters",
|
|
|
|
format_type_be(requiredResultType))));
|
|
|
|
}
|
|
|
|
else if (OidIsValid(requiredResultType))
|
|
|
|
{
|
|
|
|
/* default RETURNS clause from OUT parameters */
|
|
|
|
prorettype = requiredResultType;
|
|
|
|
returnsSet = false;
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
|
|
|
|
errmsg("function result type must be specified")));
|
|
|
|
/* Alternative possibility: default to RETURNS VOID */
|
|
|
|
prorettype = VOIDOID;
|
|
|
|
returnsSet = false;
|
|
|
|
}
|
2002-04-15 07:22:04 +02:00
|
|
|
|
2022-08-17 17:12:35 +02:00
|
|
|
if (trftypes_list != NIL)
|
2015-04-26 16:33:14 +02:00
|
|
|
{
|
|
|
|
ListCell *lc;
|
|
|
|
Datum *arr;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
arr = palloc(list_length(trftypes_list) * sizeof(Datum));
|
|
|
|
i = 0;
|
|
|
|
foreach(lc, trftypes_list)
|
|
|
|
arr[i++] = ObjectIdGetDatum(lfirst_oid(lc));
|
2022-07-01 10:51:45 +02:00
|
|
|
trftypes = construct_array_builtin(arr, list_length(trftypes_list), OIDOID);
|
2015-04-26 16:33:14 +02:00
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
2017-02-06 10:33:58 +01:00
|
|
|
/* store SQL NULL instead of empty array */
|
2015-04-26 16:33:14 +02:00
|
|
|
trftypes = NULL;
|
|
|
|
}
|
|
|
|
|
2021-04-07 21:30:08 +02:00
|
|
|
interpret_AS_clause(languageOid, language, funcname, as_clause, stmt->sql_body,
|
|
|
|
parameterTypes_list, inParameterNames_list,
|
2021-04-15 23:24:12 +02:00
|
|
|
&prosrc_str, &probin_str, &prosqlbody,
|
|
|
|
pstate->p_sourcetext);
|
2002-04-15 07:22:04 +02:00
|
|
|
|
2007-01-22 02:35:23 +01:00
|
|
|
/*
|
|
|
|
* Set default values for COST and ROWS depending on other parameters;
|
|
|
|
* reject ROWS if it's not returnsSet. NB: pg_dump knows these default
|
|
|
|
* values, keep it in sync if you change them.
|
|
|
|
*/
|
|
|
|
if (procost < 0)
|
|
|
|
{
|
|
|
|
/* SQL and PL-language functions are assumed more expensive */
|
|
|
|
if (languageOid == INTERNALlanguageId ||
|
|
|
|
languageOid == ClanguageId)
|
|
|
|
procost = 1;
|
|
|
|
else
|
|
|
|
procost = 100;
|
|
|
|
}
|
|
|
|
if (prorows < 0)
|
|
|
|
{
|
|
|
|
if (returnsSet)
|
|
|
|
prorows = 1000;
|
|
|
|
else
|
|
|
|
prorows = 0; /* dummy value if not returnsSet */
|
|
|
|
}
|
|
|
|
else if (!returnsSet)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
|
|
|
|
errmsg("ROWS is not applicable when function does not return a set")));
|
|
|
|
|
2002-04-15 07:22:04 +02:00
|
|
|
/*
|
|
|
|
* And now that we have all the parameters, and know we're permitted to do
|
|
|
|
* so, go ahead and create the function.
|
|
|
|
*/
|
2012-12-24 00:25:03 +01:00
|
|
|
return ProcedureCreate(funcname,
|
|
|
|
namespaceId,
|
|
|
|
stmt->replace,
|
|
|
|
returnsSet,
|
|
|
|
prorettype,
|
|
|
|
GetUserId(),
|
|
|
|
languageOid,
|
|
|
|
languageValidator,
|
|
|
|
prosrc_str, /* converted to text later */
|
|
|
|
probin_str, /* converted to text later */
|
2021-04-07 21:30:08 +02:00
|
|
|
prosqlbody,
|
2018-03-02 14:57:38 +01:00
|
|
|
stmt->is_procedure ? PROKIND_PROCEDURE : (isWindowFunc ? PROKIND_WINDOW : PROKIND_FUNCTION),
|
2012-12-24 00:25:03 +01:00
|
|
|
security,
|
|
|
|
isLeakProof,
|
|
|
|
isStrict,
|
|
|
|
volatility,
|
2015-09-16 21:38:47 +02:00
|
|
|
parallel,
|
2012-12-24 00:25:03 +01:00
|
|
|
parameterTypes,
|
|
|
|
PointerGetDatum(allParameterTypes),
|
|
|
|
PointerGetDatum(parameterModes),
|
|
|
|
PointerGetDatum(parameterNames),
|
|
|
|
parameterDefaults,
|
2015-04-26 16:33:14 +02:00
|
|
|
PointerGetDatum(trftypes),
|
2012-12-24 00:25:03 +01:00
|
|
|
PointerGetDatum(proconfig),
|
2019-02-10 00:08:48 +01:00
|
|
|
prosupport,
|
2012-12-24 00:25:03 +01:00
|
|
|
procost,
|
|
|
|
prorows);
|
2002-04-15 07:22:04 +02:00
|
|
|
}
|
|
|
|
|
2002-07-12 20:43:19 +02:00
|
|
|
/*
|
|
|
|
* Guts of function deletion.
|
|
|
|
*
|
|
|
|
* Note: this is also used for aggregate deletion, since the OIDs of
|
|
|
|
* both functions and aggregates point to pg_proc.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
RemoveFunctionById(Oid funcOid)
|
|
|
|
{
|
|
|
|
Relation relation;
|
|
|
|
HeapTuple tup;
|
2018-03-02 14:57:38 +01:00
|
|
|
char prokind;
|
2002-07-12 20:43:19 +02:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Delete the pg_proc tuple.
|
|
|
|
*/
|
2019-01-21 19:32:19 +01:00
|
|
|
relation = table_open(ProcedureRelationId, RowExclusiveLock);
|
2002-07-12 20:43:19 +02:00
|
|
|
|
2010-02-14 19:42:19 +01:00
|
|
|
tup = SearchSysCache1(PROCOID, ObjectIdGetDatum(funcOid));
|
2002-07-12 20:43:19 +02:00
|
|
|
if (!HeapTupleIsValid(tup)) /* should not happen */
|
2003-07-19 01:20:33 +02:00
|
|
|
elog(ERROR, "cache lookup failed for function %u", funcOid);
|
2002-07-12 20:43:19 +02:00
|
|
|
|
2018-03-02 14:57:38 +01:00
|
|
|
prokind = ((Form_pg_proc) GETSTRUCT(tup))->prokind;
|
2002-04-15 07:22:04 +02:00
|
|
|
|
2017-02-01 22:13:30 +01:00
|
|
|
CatalogTupleDelete(relation, &tup->t_self);
|
2002-04-15 07:22:04 +02:00
|
|
|
|
|
|
|
ReleaseSysCache(tup);
|
|
|
|
|
2019-01-21 19:32:19 +01:00
|
|
|
table_close(relation, RowExclusiveLock);
|
2002-07-12 20:43:19 +02:00
|
|
|
|
pgstat: scaffolding for transactional stats creation / drop.
One problematic part of the current statistics collector design is that there
is no reliable way of getting rid of statistics entries. Because of that
pgstat_vacuum_stat() (called by [auto-]vacuum) matches all stats for the
current database with the catalog contents and tries to drop now-superfluous
entries. That's quite expensive. What's worse, it doesn't work on physical
replicas, despite physical replicas collecting statistics entries.
This commit introduces infrastructure to create / drop statistics entries
transactionally, together with the underlying catalog objects (functions,
relations, subscriptions). pgstat_xact.c maintains a list of stats entries
created / dropped transactionally in the current transaction. To ensure the
removal of statistics entries is durable dropped statistics entries are
included in commit / abort (and prepare) records, which also ensures that
stats entries are dropped on standbys.
Statistics entries created separately from creating the underlying catalog
object (e.g. when stats were previously lost due to an immediate restart)
are *not* WAL logged. However that can only happen outside of the transaction
creating the catalog object, so it does not lead to "leaked" statistics
entries.
For this to work, functions creating / dropping functions / relations /
subscriptions need to call into pgstat. For subscriptions this was already
done when dropping subscriptions, via pgstat_report_subscription_drop() (now
renamed to pgstat_drop_subscription()).
This commit does not actually drop stats yet, it just provides the
infrastructure. It is however a largely independent piece of infrastructure,
so committing it separately makes sense.
Bumps XLOG_PAGE_MAGIC.
Author: Andres Freund <andres@anarazel.de>
Reviewed-By: Thomas Munro <thomas.munro@gmail.com>
Reviewed-By: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20220303021600.hs34ghqcw6zcokdh@alap3.anarazel.de
2022-04-07 03:22:22 +02:00
|
|
|
pgstat_drop_function(funcOid);
|
|
|
|
|
2002-07-12 20:43:19 +02:00
|
|
|
/*
|
|
|
|
* If there's a pg_aggregate tuple, delete that too.
|
|
|
|
*/
|
2018-03-02 14:57:38 +01:00
|
|
|
if (prokind == PROKIND_AGGREGATE)
|
2002-07-12 20:43:19 +02:00
|
|
|
{
|
2019-01-21 19:32:19 +01:00
|
|
|
relation = table_open(AggregateRelationId, RowExclusiveLock);
|
2002-07-12 20:43:19 +02:00
|
|
|
|
2010-02-14 19:42:19 +01:00
|
|
|
tup = SearchSysCache1(AGGFNOID, ObjectIdGetDatum(funcOid));
|
2002-07-12 20:43:19 +02:00
|
|
|
if (!HeapTupleIsValid(tup)) /* should not happen */
|
2003-07-19 01:20:33 +02:00
|
|
|
elog(ERROR, "cache lookup failed for pg_aggregate tuple for function %u", funcOid);
|
2002-07-12 20:43:19 +02:00
|
|
|
|
2017-02-01 22:13:30 +01:00
|
|
|
CatalogTupleDelete(relation, &tup->t_self);
|
2002-07-12 20:43:19 +02:00
|
|
|
|
|
|
|
ReleaseSysCache(tup);
|
|
|
|
|
2019-01-21 19:32:19 +01:00
|
|
|
table_close(relation, RowExclusiveLock);
|
2002-07-12 20:43:19 +02:00
|
|
|
}
|
2002-04-15 07:22:04 +02:00
|
|
|
}
|
2002-07-19 01:11:32 +02:00
|
|
|
|
2005-03-14 01:19:37 +01:00
|
|
|
/*
|
|
|
|
* Implements the ALTER FUNCTION utility command (except for the
|
|
|
|
* RENAME and OWNER clauses, which are handled as part of the generic
|
|
|
|
* ALTER framework).
|
|
|
|
*/
|
Change many routines to return ObjectAddress rather than OID
The changed routines are mostly those that can be directly called by
ProcessUtilitySlow; the intention is to make the affected object
information more precise, in support for future event trigger changes.
Originally it was envisioned that the OID of the affected object would
be enough, and in most cases that is correct, but upon actually
implementing the event trigger changes it turned out that ObjectAddress
is more widely useful.
Additionally, some command execution routines grew an output argument
that's an object address which provides further info about the executed
command. To wit:
* for ALTER DOMAIN / ADD CONSTRAINT, it corresponds to the address of
the new constraint
* for ALTER OBJECT / SET SCHEMA, it corresponds to the address of the
schema that originally contained the object.
* for ALTER EXTENSION {ADD, DROP} OBJECT, it corresponds to the address
of the object added to or dropped from the extension.
There's no user-visible change in this commit, and no functional change
either.
Discussion: 20150218213255.GC6717@tamriel.snowman.net
Reviewed-By: Stephen Frost, Andres Freund
2015-03-03 18:10:50 +01:00
|
|
|
ObjectAddress
|
2016-09-06 18:00:00 +02:00
|
|
|
AlterFunction(ParseState *pstate, AlterFunctionStmt *stmt)
|
2005-03-14 01:19:37 +01:00
|
|
|
{
|
|
|
|
HeapTuple tup;
|
|
|
|
Oid funcOid;
|
|
|
|
Form_pg_proc procForm;
|
2017-11-30 14:46:13 +01:00
|
|
|
bool is_procedure;
|
2005-03-14 01:19:37 +01:00
|
|
|
Relation rel;
|
|
|
|
ListCell *l;
|
|
|
|
DefElem *volatility_item = NULL;
|
|
|
|
DefElem *strict_item = NULL;
|
|
|
|
DefElem *security_def_item = NULL;
|
2012-02-14 04:20:27 +01:00
|
|
|
DefElem *leakproof_item = NULL;
|
2007-09-03 02:39:26 +02:00
|
|
|
List *set_items = NIL;
|
2007-01-22 02:35:23 +01:00
|
|
|
DefElem *cost_item = NULL;
|
|
|
|
DefElem *rows_item = NULL;
|
2019-02-10 00:08:48 +01:00
|
|
|
DefElem *support_item = NULL;
|
2015-09-16 21:38:47 +02:00
|
|
|
DefElem *parallel_item = NULL;
|
2015-03-03 18:10:50 +01:00
|
|
|
ObjectAddress address;
|
2005-03-14 01:19:37 +01:00
|
|
|
|
2019-01-21 19:32:19 +01:00
|
|
|
rel = table_open(ProcedureRelationId, RowExclusiveLock);
|
2005-03-14 01:19:37 +01:00
|
|
|
|
2017-11-30 14:46:13 +01:00
|
|
|
funcOid = LookupFuncWithArgs(stmt->objtype, stmt->func, false);
|
2005-03-14 01:19:37 +01:00
|
|
|
|
2019-02-10 00:08:48 +01:00
|
|
|
ObjectAddressSet(address, ProcedureRelationId, funcOid);
|
|
|
|
|
2010-02-14 19:42:19 +01:00
|
|
|
tup = SearchSysCacheCopy1(PROCOID, ObjectIdGetDatum(funcOid));
|
2005-03-14 01:19:37 +01:00
|
|
|
if (!HeapTupleIsValid(tup)) /* should not happen */
|
|
|
|
elog(ERROR, "cache lookup failed for function %u", funcOid);
|
|
|
|
|
|
|
|
procForm = (Form_pg_proc) GETSTRUCT(tup);
|
2004-06-25 23:55:59 +02:00
|
|
|
|
2005-03-14 01:19:37 +01:00
|
|
|
/* Permission check: must own function */
|
2022-11-13 08:11:17 +01:00
|
|
|
if (!object_ownercheck(ProcedureRelationId, funcOid, GetUserId()))
|
2017-12-02 15:26:34 +01:00
|
|
|
aclcheck_error(ACLCHECK_NOT_OWNER, stmt->objtype,
|
2016-12-28 18:00:00 +01:00
|
|
|
NameListToString(stmt->func->objname));
|
2005-03-14 01:19:37 +01:00
|
|
|
|
2018-03-02 14:57:38 +01:00
|
|
|
if (procForm->prokind == PROKIND_AGGREGATE)
|
2005-03-14 01:19:37 +01:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
|
|
|
|
errmsg("\"%s\" is an aggregate function",
|
2016-12-28 18:00:00 +01:00
|
|
|
NameListToString(stmt->func->objname))));
|
2005-03-14 01:19:37 +01:00
|
|
|
|
2018-03-02 14:57:38 +01:00
|
|
|
is_procedure = (procForm->prokind == PROKIND_PROCEDURE);
|
2017-11-30 14:46:13 +01:00
|
|
|
|
2005-03-14 01:19:37 +01:00
|
|
|
/* Examine requested actions. */
|
|
|
|
foreach(l, stmt->actions)
|
|
|
|
{
|
|
|
|
DefElem *defel = (DefElem *) lfirst(l);
|
|
|
|
|
2016-09-06 18:00:00 +02:00
|
|
|
if (compute_common_attribute(pstate,
|
2017-11-30 14:46:13 +01:00
|
|
|
is_procedure,
|
2016-09-06 18:00:00 +02:00
|
|
|
defel,
|
2005-03-14 01:19:37 +01:00
|
|
|
&volatility_item,
|
|
|
|
&strict_item,
|
2007-01-22 02:35:23 +01:00
|
|
|
&security_def_item,
|
2012-02-14 04:20:27 +01:00
|
|
|
&leakproof_item,
|
2007-09-03 02:39:26 +02:00
|
|
|
&set_items,
|
2007-01-22 02:35:23 +01:00
|
|
|
&cost_item,
|
2015-09-16 21:38:47 +02:00
|
|
|
&rows_item,
|
2019-02-10 00:08:48 +01:00
|
|
|
&support_item,
|
2015-09-16 21:38:47 +02:00
|
|
|
&parallel_item) == false)
|
2005-03-14 01:19:37 +01:00
|
|
|
elog(ERROR, "option \"%s\" not recognized", defel->defname);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (volatility_item)
|
|
|
|
procForm->provolatile = interpret_func_volatility(volatility_item);
|
|
|
|
if (strict_item)
|
2022-01-14 10:46:49 +01:00
|
|
|
procForm->proisstrict = boolVal(strict_item->arg);
|
2005-03-14 01:19:37 +01:00
|
|
|
if (security_def_item)
|
2022-01-14 10:46:49 +01:00
|
|
|
procForm->prosecdef = boolVal(security_def_item->arg);
|
2012-02-14 04:20:27 +01:00
|
|
|
if (leakproof_item)
|
|
|
|
{
|
2022-01-14 10:46:49 +01:00
|
|
|
procForm->proleakproof = boolVal(leakproof_item->arg);
|
2015-05-28 17:24:37 +02:00
|
|
|
if (procForm->proleakproof && !superuser())
|
2012-02-14 04:20:27 +01:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
|
|
|
|
errmsg("only superuser can define a leakproof function")));
|
|
|
|
}
|
2007-01-22 02:35:23 +01:00
|
|
|
if (cost_item)
|
|
|
|
{
|
|
|
|
procForm->procost = defGetNumeric(cost_item);
|
|
|
|
if (procForm->procost <= 0)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
|
|
|
|
errmsg("COST must be positive")));
|
|
|
|
}
|
|
|
|
if (rows_item)
|
|
|
|
{
|
|
|
|
procForm->prorows = defGetNumeric(rows_item);
|
|
|
|
if (procForm->prorows <= 0)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
|
|
|
|
errmsg("ROWS must be positive")));
|
|
|
|
if (!procForm->proretset)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
|
|
|
|
errmsg("ROWS is not applicable when function does not return a set")));
|
|
|
|
}
|
2019-02-10 00:08:48 +01:00
|
|
|
if (support_item)
|
|
|
|
{
|
|
|
|
/* interpret_func_support handles the privilege check */
|
|
|
|
Oid newsupport = interpret_func_support(support_item);
|
|
|
|
|
|
|
|
/* Add or replace dependency on support function */
|
|
|
|
if (OidIsValid(procForm->prosupport))
|
|
|
|
changeDependencyFor(ProcedureRelationId, funcOid,
|
|
|
|
ProcedureRelationId, procForm->prosupport,
|
|
|
|
newsupport);
|
|
|
|
else
|
|
|
|
{
|
|
|
|
ObjectAddress referenced;
|
|
|
|
|
|
|
|
referenced.classId = ProcedureRelationId;
|
|
|
|
referenced.objectId = newsupport;
|
|
|
|
referenced.objectSubId = 0;
|
|
|
|
recordDependencyOn(&address, &referenced, DEPENDENCY_NORMAL);
|
|
|
|
}
|
|
|
|
|
|
|
|
procForm->prosupport = newsupport;
|
|
|
|
}
|
2022-04-20 05:03:59 +02:00
|
|
|
if (parallel_item)
|
|
|
|
procForm->proparallel = interpret_func_parallel(parallel_item);
|
2007-09-03 02:39:26 +02:00
|
|
|
if (set_items)
|
|
|
|
{
|
|
|
|
Datum datum;
|
|
|
|
bool isnull;
|
|
|
|
ArrayType *a;
|
|
|
|
Datum repl_val[Natts_pg_proc];
|
2008-11-02 02:45:28 +01:00
|
|
|
bool repl_null[Natts_pg_proc];
|
|
|
|
bool repl_repl[Natts_pg_proc];
|
2007-09-03 02:39:26 +02:00
|
|
|
|
|
|
|
/* extract existing proconfig setting */
|
|
|
|
datum = SysCacheGetAttr(PROCOID, tup, Anum_pg_proc_proconfig, &isnull);
|
|
|
|
a = isnull ? NULL : DatumGetArrayTypeP(datum);
|
|
|
|
|
|
|
|
/* update according to each SET or RESET item, left to right */
|
|
|
|
a = update_proconfig_value(a, set_items);
|
|
|
|
|
|
|
|
/* update the tuple */
|
2008-11-02 02:45:28 +01:00
|
|
|
memset(repl_repl, false, sizeof(repl_repl));
|
|
|
|
repl_repl[Anum_pg_proc_proconfig - 1] = true;
|
2007-09-03 02:39:26 +02:00
|
|
|
|
|
|
|
if (a == NULL)
|
|
|
|
{
|
|
|
|
repl_val[Anum_pg_proc_proconfig - 1] = (Datum) 0;
|
2008-11-02 02:45:28 +01:00
|
|
|
repl_null[Anum_pg_proc_proconfig - 1] = true;
|
2007-09-03 02:39:26 +02:00
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
repl_val[Anum_pg_proc_proconfig - 1] = PointerGetDatum(a);
|
2008-11-02 02:45:28 +01:00
|
|
|
repl_null[Anum_pg_proc_proconfig - 1] = false;
|
2007-09-03 02:39:26 +02:00
|
|
|
}
|
|
|
|
|
2008-11-02 02:45:28 +01:00
|
|
|
tup = heap_modify_tuple(tup, RelationGetDescr(rel),
|
2007-09-03 02:39:26 +02:00
|
|
|
repl_val, repl_null, repl_repl);
|
|
|
|
}
|
2022-04-20 05:03:59 +02:00
|
|
|
/* DO NOT put more touches of procForm below here; it's now dangling. */
|
2005-03-14 01:19:37 +01:00
|
|
|
|
|
|
|
/* Do the update */
|
2017-01-31 22:42:24 +01:00
|
|
|
CatalogTupleUpdate(rel, &tup->t_self, tup);
|
2005-03-14 01:19:37 +01:00
|
|
|
|
2013-03-18 03:55:14 +01:00
|
|
|
InvokeObjectPostAlterHook(ProcedureRelationId, funcOid, 0);
|
|
|
|
|
2019-01-21 19:32:19 +01:00
|
|
|
table_close(rel, NoLock);
|
2005-03-14 01:19:37 +01:00
|
|
|
heap_freetuple(tup);
|
2012-12-29 13:55:37 +01:00
|
|
|
|
2015-03-03 18:10:50 +01:00
|
|
|
return address;
|
2005-03-14 01:19:37 +01:00
|
|
|
}
|
2003-06-27 16:45:32 +02:00
|
|
|
|
2002-07-19 01:11:32 +02:00
|
|
|
|
|
|
|
/*
|
|
|
|
* CREATE CAST
|
|
|
|
*/
|
2015-03-03 18:10:50 +01:00
|
|
|
ObjectAddress
|
2002-07-19 01:11:32 +02:00
|
|
|
CreateCast(CreateCastStmt *stmt)
|
|
|
|
{
|
|
|
|
Oid sourcetypeid;
|
|
|
|
Oid targettypeid;
|
2009-03-04 12:53:53 +01:00
|
|
|
char sourcetyptype;
|
|
|
|
char targettyptype;
|
2002-07-19 01:11:32 +02:00
|
|
|
Oid funcid;
|
Record dependencies of a cast on other casts that it requires.
When creating a cast that uses a conversion function, we've
historically allowed the input and result types to be
binary-compatible with the function's input and result types,
rather than necessarily being identical. This means that the new
cast is logically dependent on the binary-compatible cast or casts
that it references: if those are defined by pg_cast entries, and you
try to restore the new cast without having defined them, it'll fail.
Hence, we should make pg_depend entries to record these dependencies
so that pg_dump knows that there is an ordering requirement.
This is not the only place where we allow such shortcuts; aggregate
functions for example are similarly lax, and in principle should gain
similar dependencies. However, for now it seems sufficient to fix
the cast-versus-cast case, as pg_dump's other ordering heuristics
should keep it out of trouble for other object types.
Per report from David Turoň; thanks also to Robert Haas for
preliminary investigation. I considered back-patching, but
seeing that this issue has existed for many years without
previous reports, it's not clear it's worth the trouble.
Moreover, back-patching wouldn't be enough to ensure that the
new pg_depend entries exist in existing databases anyway.
Discussion: https://postgr.es/m/OF0A160F3E.578B15D1-ONC12588DA.003E4857-C12588DA.0045A428@notes.linuxbox.cz
2022-10-17 20:02:05 +02:00
|
|
|
Oid incastid = InvalidOid;
|
|
|
|
Oid outcastid = InvalidOid;
|
2004-06-16 03:27:00 +02:00
|
|
|
int nargs;
|
Extend pg_cast castimplicit column to a three-way value; this allows us
to be flexible about assignment casts without introducing ambiguity in
operator/function resolution. Introduce a well-defined promotion hierarchy
for numeric datatypes (int2->int4->int8->numeric->float4->float8).
Change make_const to initially label numeric literals as int4, int8, or
numeric (never float8 anymore).
Explicitly mark Func and RelabelType nodes to indicate whether they came
from a function call, explicit cast, or implicit cast; use this to do
reverse-listing more accurately and without so many heuristics.
Explicit casts to char, varchar, bit, varbit will truncate or pad without
raising an error (the pre-7.2 behavior), while assigning to a column without
any explicit cast will still raise an error for wrong-length data like 7.3.
This more nearly follows the SQL spec than 7.2 behavior (we should be
reporting a 'completion condition' in the explicit-cast cases, but we have
no mechanism for that, so just do silent truncation).
Fix some problems with enforcement of typmod for array elements;
it didn't work at all in 'UPDATE ... SET array[n] = foo', for example.
Provide a generalized array_length_coerce() function to replace the
specialized per-array-type functions that used to be needed (and were
missing for NUMERIC as well as all the datetime types).
Add missing conversions int8<->float4, text<->numeric, oid<->int8.
initdb forced.
2002-09-18 23:35:25 +02:00
|
|
|
char castcontext;
|
2008-10-31 09:39:22 +01:00
|
|
|
char castmethod;
|
2002-09-18 23:35:25 +02:00
|
|
|
HeapTuple tuple;
|
2011-12-19 23:05:19 +01:00
|
|
|
AclResult aclresult;
|
2020-03-10 15:28:23 +01:00
|
|
|
ObjectAddress myself;
|
2002-07-19 01:11:32 +02:00
|
|
|
|
2010-10-25 20:40:46 +02:00
|
|
|
sourcetypeid = typenameTypeId(NULL, stmt->sourcetype);
|
|
|
|
targettypeid = typenameTypeId(NULL, stmt->targettype);
|
2009-03-04 12:53:53 +01:00
|
|
|
sourcetyptype = get_typtype(sourcetypeid);
|
|
|
|
targettyptype = get_typtype(targettypeid);
|
2002-08-22 02:01:51 +02:00
|
|
|
|
2006-03-14 23:48:25 +01:00
|
|
|
/* No pseudo-types allowed */
|
2009-03-04 12:53:53 +01:00
|
|
|
if (sourcetyptype == TYPTYPE_PSEUDO)
|
2003-07-19 01:20:33 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
|
|
|
|
errmsg("source data type %s is a pseudo-type",
|
|
|
|
TypeNameToString(stmt->sourcetype))));
|
2002-08-22 02:01:51 +02:00
|
|
|
|
2009-03-04 12:53:53 +01:00
|
|
|
if (targettyptype == TYPTYPE_PSEUDO)
|
2003-07-19 01:20:33 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
|
|
|
|
errmsg("target data type %s is a pseudo-type",
|
|
|
|
TypeNameToString(stmt->targettype))));
|
2002-08-22 02:01:51 +02:00
|
|
|
|
2003-07-19 01:20:33 +02:00
|
|
|
/* Permission check */
|
2022-11-13 08:11:17 +01:00
|
|
|
if (!object_ownercheck(TypeRelationId, sourcetypeid, GetUserId())
|
|
|
|
&& !object_ownercheck(TypeRelationId, targettypeid, GetUserId()))
|
2003-07-19 01:20:33 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
|
|
|
|
errmsg("must be owner of type %s or type %s",
|
2008-10-21 12:38:51 +02:00
|
|
|
format_type_be(sourcetypeid),
|
|
|
|
format_type_be(targettypeid))));
|
2002-08-11 19:44:12 +02:00
|
|
|
|
2022-11-13 08:11:17 +01:00
|
|
|
aclresult = object_aclcheck(TypeRelationId, sourcetypeid, GetUserId(), ACL_USAGE);
|
2011-12-19 23:05:19 +01:00
|
|
|
if (aclresult != ACLCHECK_OK)
|
2012-06-15 21:55:03 +02:00
|
|
|
aclcheck_error_type(aclresult, sourcetypeid);
|
2011-12-19 23:05:19 +01:00
|
|
|
|
2022-11-13 08:11:17 +01:00
|
|
|
aclresult = object_aclcheck(TypeRelationId, targettypeid, GetUserId(), ACL_USAGE);
|
2011-12-19 23:05:19 +01:00
|
|
|
if (aclresult != ACLCHECK_OK)
|
2012-06-15 21:55:03 +02:00
|
|
|
aclcheck_error_type(aclresult, targettypeid);
|
2011-12-19 23:05:19 +01:00
|
|
|
|
2012-04-24 15:20:53 +02:00
|
|
|
/* Domains are allowed for historical reasons, but we warn */
|
|
|
|
if (sourcetyptype == TYPTYPE_DOMAIN)
|
|
|
|
ereport(WARNING,
|
|
|
|
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
|
|
|
|
errmsg("cast will be ignored because the source data type is a domain")));
|
|
|
|
|
|
|
|
else if (targettyptype == TYPTYPE_DOMAIN)
|
|
|
|
ereport(WARNING,
|
|
|
|
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
|
|
|
|
errmsg("cast will be ignored because the target data type is a domain")));
|
|
|
|
|
2017-02-06 10:33:58 +01:00
|
|
|
/* Determine the cast method */
|
2002-07-19 01:11:32 +02:00
|
|
|
if (stmt->func != NULL)
|
2008-10-31 09:39:22 +01:00
|
|
|
castmethod = COERCION_METHOD_FUNCTION;
|
|
|
|
else if (stmt->inout)
|
|
|
|
castmethod = COERCION_METHOD_INOUT;
|
|
|
|
else
|
|
|
|
castmethod = COERCION_METHOD_BINARY;
|
|
|
|
|
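The three-way method selection below can be restated as a tiny pure function; 'f', 'i', and 'b' are the single-character codes stored in pg_cast.castmethod (a sketch of the decision only, not the backend's enum constants):

```c
#include <stdbool.h>

/*
 * Illustrative mirror of the method selection: a cast defined
 * WITH FUNCTION uses method 'f', WITH INOUT uses 'i', and WITHOUT
 * FUNCTION (binary coercion) uses 'b'.
 */
char pick_cast_method(bool has_func, bool inout)
{
	if (has_func)
		return 'f';				/* COERCION_METHOD_FUNCTION */
	if (inout)
		return 'i';				/* COERCION_METHOD_INOUT */
	return 'b';					/* COERCION_METHOD_BINARY */
}
```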
|
|
|
if (castmethod == COERCION_METHOD_FUNCTION)
|
2002-07-19 01:11:32 +02:00
|
|
|
{
|
2002-09-18 23:35:25 +02:00
|
|
|
Form_pg_proc procstruct;
|
|
|
|
|
2017-11-30 14:46:13 +01:00
|
|
|
funcid = LookupFuncWithArgs(OBJECT_FUNCTION, stmt->func, false);
|
2002-07-19 01:11:32 +02:00
|
|
|
|
2010-02-14 19:42:19 +01:00
|
|
|
tuple = SearchSysCache1(PROCOID, ObjectIdGetDatum(funcid));
|
2002-07-19 01:11:32 +02:00
|
|
|
if (!HeapTupleIsValid(tuple))
|
2003-07-19 01:20:33 +02:00
|
|
|
elog(ERROR, "cache lookup failed for function %u", funcid);
|
2002-07-19 01:11:32 +02:00
|
|
|
|
|
|
|
procstruct = (Form_pg_proc) GETSTRUCT(tuple);
|
2004-06-16 03:27:00 +02:00
|
|
|
nargs = procstruct->pronargs;
|
|
|
|
if (nargs < 1 || nargs > 3)
|
2003-07-19 01:20:33 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
2004-06-16 03:27:00 +02:00
|
|
|
errmsg("cast function must take one to three arguments")));
|
Record dependencies of a cast on other casts that it requires.
When creating a cast that uses a conversion function, we've
historically allowed the input and result types to be
binary-compatible with the function's input and result types,
rather than necessarily being identical. This means that the new
cast is logically dependent on the binary-compatible cast or casts
that it references: if those are defined by pg_cast entries, and you
try to restore the new cast without having defined them, it'll fail.
Hence, we should make pg_depend entries to record these dependencies
so that pg_dump knows that there is an ordering requirement.
This is not the only place where we allow such shortcuts; aggregate
functions for example are similarly lax, and in principle should gain
similar dependencies. However, for now it seems sufficient to fix
the cast-versus-cast case, as pg_dump's other ordering heuristics
should keep it out of trouble for other object types.
Per report from David Turoň; thanks also to Robert Haas for
preliminary investigation. I considered back-patching, but
seeing that this issue has existed for many years without
previous reports, it's not clear it's worth the trouble.
Moreover, back-patching wouldn't be enough to ensure that the
new pg_depend entries exist in existing databases anyway.
Discussion: https://postgr.es/m/OF0A160F3E.578B15D1-ONC12588DA.003E4857-C12588DA.0045A428@notes.linuxbox.cz
2022-10-17 20:02:05 +02:00
|
|
|
if (!IsBinaryCoercibleWithCast(sourcetypeid,
|
|
|
|
procstruct->proargtypes.values[0],
|
|
|
|
&incastid))
|
2003-07-19 01:20:33 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
2008-07-12 12:44:56 +02:00
|
|
|
errmsg("argument of cast function must match or be binary-coercible from source data type")));
|
2005-03-29 02:17:27 +02:00
|
|
|
if (nargs > 1 && procstruct->proargtypes.values[1] != INT4OID)
|
2004-06-16 03:27:00 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
2017-01-18 20:08:20 +01:00
|
|
|
errmsg("second argument of cast function must be type %s",
|
|
|
|
"integer")));
|
2005-03-29 02:17:27 +02:00
|
|
|
if (nargs > 2 && procstruct->proargtypes.values[2] != BOOLOID)
|
2004-06-16 03:27:00 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
2017-01-18 20:08:20 +01:00
|
|
|
errmsg("third argument of cast function must be type %s",
|
|
|
|
"boolean")));
|
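The argument-count and argument-type rules enforced above can be collected into one standalone predicate. This is an illustrative reimplementation, not the backend code; the OID constants are stand-ins:

```c
#include <stdbool.h>

/* Stand-in OIDs for illustration only (not the catalog constants). */
#define ILLUS_INT4OID 23
#define ILLUS_BOOLOID 16

/*
 * Mirror of the signature checks: a cast function takes one to three
 * arguments; argument 2, if present, must be int4 (the typmod) and
 * argument 3 must be bool (the is-explicit flag).
 */
bool cast_func_signature_ok(int nargs, const unsigned int *argtypes)
{
	if (nargs < 1 || nargs > 3)
		return false;
	if (nargs > 1 && argtypes[1] != ILLUS_INT4OID)
		return false;
	if (nargs > 2 && argtypes[2] != ILLUS_BOOLOID)
		return false;
	return true;
}
```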
2022-10-17 20:02:05 +02:00
|
|
|
if (!IsBinaryCoercibleWithCast(procstruct->prorettype,
|
|
|
|
targettypeid,
|
|
|
|
&outcastid))
|
2003-07-19 01:20:33 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
2008-07-12 12:44:56 +02:00
|
|
|
errmsg("return data type of cast function must match or be binary-coercible to target data type")));
|
2003-08-04 02:43:34 +02:00
|
|
|
|
2003-02-01 23:09:26 +01:00
|
|
|
/*
|
|
|
|
* Restricting the volatility of a cast function may or may not be a
|
|
|
|
* good idea in the abstract, but it definitely breaks many old
|
|
|
|
* user-defined types. Disable this check --- tgl 2/1/03
|
|
|
|
*/
|
|
|
|
#ifdef NOT_USED
|
2002-09-15 15:04:16 +02:00
|
|
|
if (procstruct->provolatile == PROVOLATILE_VOLATILE)
|
2003-07-19 01:20:33 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
|
|
|
errmsg("cast function must not be volatile")));
|
2003-02-01 23:09:26 +01:00
|
|
|
#endif
|
2018-03-02 14:57:38 +01:00
|
|
|
if (procstruct->prokind != PROKIND_FUNCTION)
|
2003-07-19 01:20:33 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
2018-03-02 14:57:38 +01:00
|
|
|
errmsg("cast function must be a normal function")));
|
2002-07-19 01:11:32 +02:00
|
|
|
if (procstruct->proretset)
|
2003-07-19 01:20:33 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
|
|
|
errmsg("cast function must not return a set")));
|
2002-07-19 01:11:32 +02:00
|
|
|
|
|
|
|
ReleaseSysCache(tuple);
|
|
|
|
}
|
|
|
|
else
|
2008-10-31 09:39:22 +01:00
|
|
|
{
|
|
|
|
funcid = InvalidOid;
|
|
|
|
nargs = 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (castmethod == COERCION_METHOD_BINARY)
|
2002-07-19 01:11:32 +02:00
|
|
|
{
|
2002-10-05 00:08:44 +02:00
|
|
|
int16 typ1len;
|
|
|
|
int16 typ2len;
|
|
|
|
bool typ1byval;
|
|
|
|
bool typ2byval;
|
|
|
|
char typ1align;
|
|
|
|
char typ2align;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Must be superuser to create binary-compatible casts, since
|
|
|
|
* erroneous casts can easily crash the backend.
|
|
|
|
*/
|
|
|
|
if (!superuser())
|
2003-07-19 01:20:33 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
|
|
|
|
errmsg("must be superuser to create a cast WITHOUT FUNCTION")));
|
2002-10-05 00:08:44 +02:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Also, insist that the types match as to size, alignment, and
|
|
|
|
* pass-by-value attributes; this provides at least a crude check that
|
|
|
|
* they have similar representations. A pair of types that fail this
|
|
|
|
* test should certainly not be equated.
|
|
|
|
*/
|
|
|
|
get_typlenbyvalalign(sourcetypeid, &typ1len, &typ1byval, &typ1align);
|
|
|
|
get_typlenbyvalalign(targettypeid, &typ2len, &typ2byval, &typ2align);
|
|
|
|
if (typ1len != typ2len ||
|
|
|
|
typ1byval != typ2byval ||
|
|
|
|
typ1align != typ2align)
|
2003-07-19 01:20:33 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
2003-09-29 02:05:25 +02:00
|
|
|
errmsg("source and target data types are not physically compatible")));
|
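The physical-compatibility test above reduces to a three-field comparison. A minimal sketch, with example field values rather than catalog lookups:

```c
#include <stdbool.h>

/*
 * Illustrative restatement of the check: two types may be equated
 * WITHOUT FUNCTION only when their storage length, pass-by-value
 * flag, and alignment all match.
 */
typedef struct
{
	short		len;			/* typlen: -1 for varlena */
	bool		byval;			/* typbyval */
	char		align;			/* typalign: 'c', 's', 'i', or 'd' */
} TypePhys;

bool physically_compatible(TypePhys a, TypePhys b)
{
	return a.len == b.len && a.byval == b.byval && a.align == b.align;
}
```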
2009-03-04 12:53:53 +01:00
|
|
|
|
|
|
|
/*
|
|
|
|
* We know that composite, enum and array types are never binary-
|
|
|
|
* compatible with each other. They all have OIDs embedded in them.
|
|
|
|
*
|
|
|
|
* Theoretically you could build a user-defined base type that is
|
|
|
|
* binary-compatible with a composite, enum, or array type. But we
|
|
|
|
* disallow that too, as in practice such a cast is surely a mistake.
|
|
|
|
* You can always work around that by writing a cast function.
|
|
|
|
*/
|
|
|
|
if (sourcetyptype == TYPTYPE_COMPOSITE ||
|
|
|
|
targettyptype == TYPTYPE_COMPOSITE)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
|
|
|
errmsg("composite data types are not binary-compatible")));
|
|
|
|
|
|
|
|
if (sourcetyptype == TYPTYPE_ENUM ||
|
|
|
|
targettyptype == TYPTYPE_ENUM)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
|
|
|
errmsg("enum data types are not binary-compatible")));
|
|
|
|
|
|
|
|
if (OidIsValid(get_element_type(sourcetypeid)) ||
|
|
|
|
OidIsValid(get_element_type(targettypeid)))
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
|
|
|
errmsg("array data types are not binary-compatible")));
|
Improve handling of domains over arrays.
This patch eliminates various bizarre behaviors caused by sloppy thinking
about the difference between a domain type and its underlying array type.
In particular, the operation of updating one element of such an array
has to be considered as yielding a value of the underlying array type,
*not* a value of the domain, because there's no assurance that the
domain's CHECK constraints are still satisfied. If we're intending to
store the result back into a domain column, we have to re-cast to the
domain type so that constraints are re-checked.
For similar reasons, such a domain can't be blindly matched to an ANYARRAY
polymorphic parameter, because the polymorphic function is likely to apply
array-ish operations that could invalidate the domain constraints. For the
moment, we just forbid such matching. We might later wish to insert an
automatic downcast to the underlying array type, but such a change should
also change matching of domains to ANYELEMENT for consistency.
To ensure that all such logic is rechecked, this patch removes the original
hack of setting a domain's pg_type.typelem field to match its base type;
the typelem will always be zero instead. In those places where it's really
okay to look through the domain type with no other logic changes, use the
newly added get_base_element_type function in place of get_element_type.
catversion bumped due to change in pg_type contents.
Per bug #5717 from Richard Huxton and subsequent discussion.
2010-10-21 22:07:17 +02:00
|
|
|
|
|
|
|
/*
|
|
|
|
* We also disallow creating binary-compatibility casts involving
|
|
|
|
* domains. Casting from a domain to its base type is already
|
|
|
|
* allowed, and casting the other way ought to go through domain
|
|
|
|
* coercion to permit constraint checking. Again, if you're intent on
|
|
|
|
* having your own semantics for that, create a no-op cast function.
|
|
|
|
*
|
|
|
|
* NOTE: if we were to relax this, the above checks for composites
|
|
|
|
* etc. would have to be modified to look through domains to their
|
|
|
|
* base types.
|
|
|
|
*/
|
|
|
|
if (sourcetyptype == TYPTYPE_DOMAIN ||
|
|
|
|
targettyptype == TYPTYPE_DOMAIN)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
|
|
|
errmsg("domain data types must not be marked binary-compatible")));
|
2002-07-19 01:11:32 +02:00
|
|
|
}
|
|
|
|
|
2004-06-16 03:27:00 +02:00
|
|
|
/*
|
|
|
|
* Allow source and target types to be same only for length coercion
|
|
|
|
* functions. We assume a multi-arg function does length coercion.
|
|
|
|
*/
|
|
|
|
if (sourcetypeid == targettypeid && nargs < 2)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
|
|
|
errmsg("source data type and target data type are the same")));
|
|
|
|
|
2002-09-18 23:35:25 +02:00
|
|
|
/* convert CoercionContext enum to char value for castcontext */
|
|
|
|
switch (stmt->context)
|
|
|
|
{
|
|
|
|
case COERCION_IMPLICIT:
|
|
|
|
castcontext = COERCION_CODE_IMPLICIT;
|
|
|
|
break;
|
|
|
|
case COERCION_ASSIGNMENT:
|
|
|
|
castcontext = COERCION_CODE_ASSIGNMENT;
|
|
|
|
break;
|
2021-01-04 17:52:00 +01:00
|
|
|
/* COERCION_PLPGSQL is intentionally not covered here */
|
2002-09-18 23:35:25 +02:00
|
|
|
case COERCION_EXPLICIT:
|
|
|
|
castcontext = COERCION_CODE_EXPLICIT;
|
|
|
|
break;
|
|
|
|
default:
|
2003-07-19 01:20:33 +02:00
|
|
|
elog(ERROR, "unrecognized CoercionContext: %d", stmt->context);
|
2002-09-18 23:35:25 +02:00
|
|
|
castcontext = 0; /* keep compiler quiet */
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
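The switch above maps the parser's CoercionContext onto the single-character codes stored in pg_cast.castcontext ('i' implicit, 'a' assignment, 'e' explicit). A sketch of that mapping, using a stand-in enum rather than the real one (which also has members that are never stored in pg_cast):

```c
/* Stand-in for the parser's CoercionContext enum. */
typedef enum
{
	CTX_IMPLICIT,
	CTX_ASSIGNMENT,
	CTX_EXPLICIT
} IllusCoercionContext;

char coercion_context_code(IllusCoercionContext ctx)
{
	switch (ctx)
	{
		case CTX_IMPLICIT:
			return 'i';
		case CTX_ASSIGNMENT:
			return 'a';
		case CTX_EXPLICIT:
			return 'e';
	}
	return 0;					/* unreachable; keeps the compiler quiet */
}
```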
2022-10-17 20:02:05 +02:00
|
|
|
myself = CastCreate(sourcetypeid, targettypeid, funcid, incastid, outcastid,
|
|
|
|
castcontext, castmethod, DEPENDENCY_NORMAL);
|
Change many routines to return ObjectAddress rather than OID
The changed routines are mostly those that can be directly called by
ProcessUtilitySlow; the intention is to make the affected object
information more precise, in support for future event trigger changes.
Originally it was envisioned that the OID of the affected object would
be enough, and in most cases that is correct, but upon actually
implementing the event trigger changes it turned out that ObjectAddress
is more widely useful.
Additionally, some command execution routines grew an output argument
that's an object address which provides further info about the executed
command. To wit:
* for ALTER DOMAIN / ADD CONSTRAINT, it corresponds to the address of
the new constraint
* for ALTER OBJECT / SET SCHEMA, it corresponds to the address of the
schema that originally contained the object.
* for ALTER EXTENSION {ADD, DROP} OBJECT, it corresponds to the address
of the object added to or dropped from the extension.
There's no user-visible change in this commit, and no functional change
either.
Discussion: 20150218213255.GC6717@tamriel.snowman.net
Reviewed-By: Stephen Frost, Andres Freund
2015-03-03 18:10:50 +01:00
|
|
|
return myself;
|
2002-07-19 01:11:32 +02:00
|
|
|
}
|
|
|
|
|
2015-04-26 16:33:14 +02:00
|
|
|
|
|
|
|
static void
|
|
|
|
check_transform_function(Form_pg_proc procstruct)
|
|
|
|
{
|
|
|
|
if (procstruct->provolatile == PROVOLATILE_VOLATILE)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
|
|
|
errmsg("transform function must not be volatile")));
|
2018-03-02 14:57:38 +01:00
|
|
|
if (procstruct->prokind != PROKIND_FUNCTION)
|
2015-04-26 16:33:14 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
2018-03-02 14:57:38 +01:00
|
|
|
errmsg("transform function must be a normal function")));
|
2015-04-26 16:33:14 +02:00
|
|
|
if (procstruct->proretset)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
|
|
|
errmsg("transform function must not return a set")));
|
|
|
|
if (procstruct->pronargs != 1)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
|
|
|
errmsg("transform function must take one argument")));
|
|
|
|
if (procstruct->proargtypes.values[0] != INTERNALOID)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
2017-01-18 20:08:20 +01:00
|
|
|
errmsg("first argument of transform function must be type %s",
|
|
|
|
"internal")));
|
2015-04-26 16:33:14 +02:00
|
|
|
}
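The conditions enforced by check_transform_function() above can be summarized in one boolean predicate. An illustrative restatement under stand-in constants, not the backend code:

```c
#include <stdbool.h>

#define ILLUS_INTERNALOID 2281	/* stand-in for INTERNALOID */

/*
 * A transform support function must be non-volatile, a normal (not
 * aggregate/window/procedure) function, must not return a set, and
 * must take exactly one argument of type internal.
 */
bool transform_func_ok(bool is_volatile, bool is_normal_func,
					   bool returns_set, int nargs,
					   unsigned int argtype0)
{
	return !is_volatile && is_normal_func && !returns_set &&
		nargs == 1 && argtype0 == ILLUS_INTERNALOID;
}
```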
|
|
|
|
|
|
|
|
|
|
|
|
/*
|
|
|
|
* CREATE TRANSFORM
|
|
|
|
*/
|
2015-06-26 23:17:54 +02:00
|
|
|
ObjectAddress
|
2015-04-26 16:33:14 +02:00
|
|
|
CreateTransform(CreateTransformStmt *stmt)
|
|
|
|
{
|
|
|
|
Oid typeid;
|
|
|
|
char typtype;
|
|
|
|
Oid langid;
|
|
|
|
Oid fromsqlfuncid;
|
|
|
|
Oid tosqlfuncid;
|
|
|
|
AclResult aclresult;
|
|
|
|
Form_pg_proc procstruct;
|
|
|
|
Datum values[Natts_pg_transform];
|
2022-07-16 08:42:15 +02:00
|
|
|
bool nulls[Natts_pg_transform] = {0};
|
|
|
|
bool replaces[Natts_pg_transform] = {0};
|
2015-04-26 16:33:14 +02:00
|
|
|
Oid transformid;
|
|
|
|
HeapTuple tuple;
|
|
|
|
HeapTuple newtuple;
|
|
|
|
Relation relation;
|
|
|
|
ObjectAddress myself,
|
|
|
|
referenced;
|
2020-09-05 14:33:53 +02:00
|
|
|
ObjectAddresses *addrs;
|
2015-04-26 16:33:14 +02:00
|
|
|
bool is_replace;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Get the type
|
|
|
|
*/
|
|
|
|
typeid = typenameTypeId(NULL, stmt->type_name);
|
|
|
|
typtype = get_typtype(typeid);
|
|
|
|
|
|
|
|
if (typtype == TYPTYPE_PSEUDO)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
|
|
|
|
errmsg("data type %s is a pseudo-type",
|
|
|
|
TypeNameToString(stmt->type_name))));
|
|
|
|
|
|
|
|
if (typtype == TYPTYPE_DOMAIN)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
|
|
|
|
errmsg("data type %s is a domain",
|
|
|
|
TypeNameToString(stmt->type_name))));
|
|
|
|
|
2022-11-13 08:11:17 +01:00
|
|
|
if (!object_ownercheck(TypeRelationId, typeid, GetUserId()))
|
2015-04-26 16:33:14 +02:00
|
|
|
aclcheck_error_type(ACLCHECK_NOT_OWNER, typeid);
|
|
|
|
|
2022-11-13 08:11:17 +01:00
|
|
|
aclresult = object_aclcheck(TypeRelationId, typeid, GetUserId(), ACL_USAGE);
|
2015-04-26 16:33:14 +02:00
|
|
|
if (aclresult != ACLCHECK_OK)
|
|
|
|
aclcheck_error_type(aclresult, typeid);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Get the language
|
|
|
|
*/
|
|
|
|
langid = get_language_oid(stmt->lang, false);
|
|
|
|
|
2022-11-13 08:11:17 +01:00
|
|
|
aclresult = object_aclcheck(LanguageRelationId, langid, GetUserId(), ACL_USAGE);
|
2015-04-26 16:33:14 +02:00
|
|
|
if (aclresult != ACLCHECK_OK)
|
2017-12-02 15:26:34 +01:00
|
|
|
aclcheck_error(aclresult, OBJECT_LANGUAGE, stmt->lang);
|
2015-04-26 16:33:14 +02:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Get the functions
|
|
|
|
*/
|
|
|
|
if (stmt->fromsql)
|
|
|
|
{
|
2017-11-30 14:46:13 +01:00
|
|
|
fromsqlfuncid = LookupFuncWithArgs(OBJECT_FUNCTION, stmt->fromsql, false);
|
2015-04-26 16:33:14 +02:00
|
|
|
|
2022-11-13 08:11:17 +01:00
|
|
|
if (!object_ownercheck(ProcedureRelationId, fromsqlfuncid, GetUserId()))
|
2017-12-02 15:26:34 +01:00
|
|
|
aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION, NameListToString(stmt->fromsql->objname));
|
2015-04-26 16:33:14 +02:00
|
|
|
|
2022-11-13 08:11:17 +01:00
|
|
|
aclresult = object_aclcheck(ProcedureRelationId, fromsqlfuncid, GetUserId(), ACL_EXECUTE);
|
2015-04-26 16:33:14 +02:00
|
|
|
if (aclresult != ACLCHECK_OK)
|
2017-12-02 15:26:34 +01:00
|
|
|
aclcheck_error(aclresult, OBJECT_FUNCTION, NameListToString(stmt->fromsql->objname));
|
2015-04-26 16:33:14 +02:00
|
|
|
|
|
|
|
tuple = SearchSysCache1(PROCOID, ObjectIdGetDatum(fromsqlfuncid));
|
|
|
|
if (!HeapTupleIsValid(tuple))
|
|
|
|
elog(ERROR, "cache lookup failed for function %u", fromsqlfuncid);
|
|
|
|
procstruct = (Form_pg_proc) GETSTRUCT(tuple);
|
|
|
|
if (procstruct->prorettype != INTERNALOID)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
2017-01-18 20:08:20 +01:00
|
|
|
errmsg("return data type of FROM SQL function must be %s",
|
|
|
|
"internal")));
|
2015-04-26 16:33:14 +02:00
|
|
|
check_transform_function(procstruct);
|
|
|
|
ReleaseSysCache(tuple);
|
|
|
|
}
|
|
|
|
else
|
|
|
|
fromsqlfuncid = InvalidOid;
|
|
|
|
|
|
|
|
if (stmt->tosql)
|
|
|
|
{
|
2017-11-30 14:46:13 +01:00
|
|
|
tosqlfuncid = LookupFuncWithArgs(OBJECT_FUNCTION, stmt->tosql, false);
|
2015-04-26 16:33:14 +02:00
|
|
|
|
2022-11-13 08:11:17 +01:00
|
|
|
if (!object_ownercheck(ProcedureRelationId, tosqlfuncid, GetUserId()))
|
2017-12-02 15:26:34 +01:00
|
|
|
aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION, NameListToString(stmt->tosql->objname));
|
2015-04-26 16:33:14 +02:00
|
|
|
|
2022-11-13 08:11:17 +01:00
|
|
|
aclresult = object_aclcheck(ProcedureRelationId, tosqlfuncid, GetUserId(), ACL_EXECUTE);
|
2015-04-26 16:33:14 +02:00
|
|
|
if (aclresult != ACLCHECK_OK)
|
2017-12-02 15:26:34 +01:00
|
|
|
aclcheck_error(aclresult, OBJECT_FUNCTION, NameListToString(stmt->tosql->objname));
|
2015-04-26 16:33:14 +02:00
|
|
|
|
|
|
|
tuple = SearchSysCache1(PROCOID, ObjectIdGetDatum(tosqlfuncid));
|
|
|
|
if (!HeapTupleIsValid(tuple))
|
|
|
|
elog(ERROR, "cache lookup failed for function %u", tosqlfuncid);
|
|
|
|
procstruct = (Form_pg_proc) GETSTRUCT(tuple);
|
|
|
|
if (procstruct->prorettype != typeid)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
|
|
|
errmsg("return data type of TO SQL function must be the transform data type")));
|
|
|
|
check_transform_function(procstruct);
|
|
|
|
ReleaseSysCache(tuple);
|
|
|
|
}
|
|
|
|
else
|
|
|
|
tosqlfuncid = InvalidOid;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Ready to go
|
|
|
|
*/
|
|
|
|
values[Anum_pg_transform_trftype - 1] = ObjectIdGetDatum(typeid);
|
|
|
|
values[Anum_pg_transform_trflang - 1] = ObjectIdGetDatum(langid);
|
|
|
|
values[Anum_pg_transform_trffromsql - 1] = ObjectIdGetDatum(fromsqlfuncid);
|
|
|
|
values[Anum_pg_transform_trftosql - 1] = ObjectIdGetDatum(tosqlfuncid);
|
|
|
|
|
2019-01-21 19:32:19 +01:00
|
|
|
relation = table_open(TransformRelationId, RowExclusiveLock);
|
2015-04-26 16:33:14 +02:00
|
|
|
|
|
|
|
tuple = SearchSysCache2(TRFTYPELANG,
|
|
|
|
ObjectIdGetDatum(typeid),
|
|
|
|
ObjectIdGetDatum(langid));
|
|
|
|
if (HeapTupleIsValid(tuple))
|
|
|
|
{
|
		Form_pg_transform form = (Form_pg_transform) GETSTRUCT(tuple);

		if (!stmt->replace)
			ereport(ERROR,
					(errcode(ERRCODE_DUPLICATE_OBJECT),
					 errmsg("transform for type %s language \"%s\" already exists",
							format_type_be(typeid),
							stmt->lang)));

		replaces[Anum_pg_transform_trffromsql - 1] = true;
		replaces[Anum_pg_transform_trftosql - 1] = true;

		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values, nulls, replaces);
		CatalogTupleUpdate(relation, &newtuple->t_self, newtuple);
		transformid = form->oid;
		ReleaseSysCache(tuple);
		is_replace = true;
	}
	else
	{
		transformid = GetNewOidWithIndex(relation, TransformOidIndexId,
										 Anum_pg_transform_oid);
		values[Anum_pg_transform_oid - 1] = ObjectIdGetDatum(transformid);
		newtuple = heap_form_tuple(RelationGetDescr(relation), values, nulls);
		CatalogTupleInsert(relation, newtuple);
		is_replace = false;
	}

	if (is_replace)
		deleteDependencyRecordsFor(TransformRelationId, transformid, true);

	addrs = new_object_addresses();

	/* make dependency entries */
	ObjectAddressSet(myself, TransformRelationId, transformid);

	/* dependency on language */
	ObjectAddressSet(referenced, LanguageRelationId, langid);
	add_exact_object_address(&referenced, addrs);

	/* dependency on type */
	ObjectAddressSet(referenced, TypeRelationId, typeid);
	add_exact_object_address(&referenced, addrs);

	/* dependencies on functions */
	if (OidIsValid(fromsqlfuncid))
	{
		ObjectAddressSet(referenced, ProcedureRelationId, fromsqlfuncid);
		add_exact_object_address(&referenced, addrs);
	}
	if (OidIsValid(tosqlfuncid))
	{
		ObjectAddressSet(referenced, ProcedureRelationId, tosqlfuncid);
		add_exact_object_address(&referenced, addrs);
	}

	record_object_address_dependencies(&myself, addrs, DEPENDENCY_NORMAL);
	free_object_addresses(addrs);

	/* dependency on extension */
	recordDependencyOnCurrentExtension(&myself, is_replace);

	/* Post creation hook for new transform */
	InvokeObjectPostCreateHook(TransformRelationId, transformid, 0);

	heap_freetuple(newtuple);

	table_close(relation, RowExclusiveLock);

	return myself;
}
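The update-or-insert sequence above (probe the syscache; on a hit, require REPLACE and rewrite the function columns via heap_modify_tuple/CatalogTupleUpdate while keeping the row's OID; on a miss, allocate a fresh OID with GetNewOidWithIndex and insert) can be sketched outside the backend as a toy model. Everything below — `ToyTransform`, `upsert_transform`, the array-based "catalog" — is illustrative only, not PostgreSQL API.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-in for a pg_transform row, keyed by (type, language). */
typedef struct ToyTransform
{
	unsigned	oid;
	unsigned	typid;
	unsigned	langid;
	unsigned	fromsql;
	unsigned	tosql;
} ToyTransform;

static ToyTransform catalog[8];
static size_t ncatalog = 0;
static unsigned next_oid = 16384;	/* first post-bootstrap OID, as in PostgreSQL */

/*
 * Mirror of CreateTransform's update-or-insert step: returns the row's OID;
 * *is_replace reports whether an existing row was updated in place.
 */
static unsigned
upsert_transform(unsigned typid, unsigned langid,
				 unsigned fromsql, unsigned tosql, bool *is_replace)
{
	for (size_t i = 0; i < ncatalog; i++)
	{
		if (catalog[i].typid == typid && catalog[i].langid == langid)
		{
			/* found: overwrite only the function columns, keep the OID */
			catalog[i].fromsql = fromsql;
			catalog[i].tosql = tosql;
			*is_replace = true;
			return catalog[i].oid;
		}
	}
	/* not found: assign a fresh OID and insert a new row */
	catalog[ncatalog] = (ToyTransform) {next_oid++, typid, langid, fromsql, tosql};
	ncatalog++;
	*is_replace = false;
	return catalog[ncatalog - 1].oid;
}
```

As in the real code, a replace keeps the transform's identity (its OID) stable, so existing dependency records can be dropped and rebuilt against the same object.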

/*
 * get_transform_oid - given type OID and language OID, look up a transform OID
 *
 * If missing_ok is false, throw an error if the transform is not found.  If
 * true, just return InvalidOid.
 */
Oid
get_transform_oid(Oid type_id, Oid lang_id, bool missing_ok)
{
	Oid			oid;

	oid = GetSysCacheOid2(TRFTYPELANG, Anum_pg_transform_oid,
						  ObjectIdGetDatum(type_id),
						  ObjectIdGetDatum(lang_id));
	if (!OidIsValid(oid) && !missing_ok)
		ereport(ERROR,
				(errcode(ERRCODE_UNDEFINED_OBJECT),
				 errmsg("transform for type %s language \"%s\" does not exist",
						format_type_be(type_id),
						get_language_name(lang_id, false))));
	return oid;
}

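The missing_ok convention used by get_transform_oid is shared by many get_*_oid helpers: a failed lookup either raises an error or quietly returns InvalidOid, letting callers choose between "must exist" and "probe" semantics. A minimal standalone sketch, with `lookup_oid` standing in for the syscache lookup (all names here are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define InvalidOid 0			/* PostgreSQL's invalid-OID sentinel */

/* Toy lookup: pretend only (type 25, language 13) has a transform. */
static unsigned
lookup_oid(unsigned type_id, unsigned lang_id)
{
	return (type_id == 25 && lang_id == 13) ? 16500 : InvalidOid;
}

/* Sketch of the missing_ok contract: error out, or return InvalidOid. */
static unsigned
toy_get_transform_oid(unsigned type_id, unsigned lang_id, bool missing_ok)
{
	unsigned	oid = lookup_oid(type_id, lang_id);

	if (oid == InvalidOid && !missing_ok)
	{
		/* the backend would ereport(ERROR, ...) here instead of exiting */
		fprintf(stderr, "transform for type %u language %u does not exist\n",
				type_id, lang_id);
		exit(1);
	}
	return oid;
}
```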
/*
 * Subroutine for ALTER FUNCTION/AGGREGATE SET SCHEMA/RENAME
 *
 * Is there a function with the given name and signature already in the given
 * namespace?  If so, raise an appropriate error message.
 */
void
IsThereFunctionInNamespace(const char *proname, int pronargs,
						   oidvector *proargtypes, Oid nspOid)
{
	/* check for duplicate name (more friendly than unique-index failure) */
	if (SearchSysCacheExists3(PROCNAMEARGSNSP,
							  CStringGetDatum(proname),
							  PointerGetDatum(proargtypes),
							  ObjectIdGetDatum(nspOid)))
		ereport(ERROR,
				(errcode(ERRCODE_DUPLICATE_FUNCTION),
				 errmsg("function %s already exists in schema \"%s\"",
						funcname_signature_string(proname, pronargs,
												  NIL, proargtypes->values),
						get_namespace_name(nspOid))));
}

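The duplicate check above keys on (name, argument types, namespace): the return type plays no part in a function's identity, which is why two overloads may differ only in their argument lists. A self-contained sketch of that comparison, using toy structures rather than the real PROCNAMEARGSNSP syscache:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Toy stand-in for a pg_proc row: note there is no return-type field here,
 * because the return type is not part of the lookup key. */
typedef struct ToyProc
{
	const char *name;
	int			nargs;
	unsigned	argtypes[4];	/* type OIDs, first nargs entries valid */
	unsigned	nsp;			/* namespace OID */
} ToyProc;

/* Does a function with this exact (name, argtypes, namespace) exist? */
static bool
exists_in_namespace(const ToyProc *procs, int nprocs,
					const char *name, int nargs,
					const unsigned *argtypes, unsigned nsp)
{
	for (int i = 0; i < nprocs; i++)
	{
		if (procs[i].nsp == nsp &&
			procs[i].nargs == nargs &&
			strcmp(procs[i].name, name) == 0 &&
			memcmp(procs[i].argtypes, argtypes,
				   nargs * sizeof(unsigned)) == 0)
			return true;
	}
	return false;
}
```

Doing this probe before the rename or schema move is what turns an opaque unique-index violation into the friendly "function ... already exists in schema" error.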
/*
 * ExecuteDoStmt
 *		Execute inline procedural-language code
 *
 * See ExecuteCallStmt() about the atomic argument.
 */
void
ExecuteDoStmt(ParseState *pstate, DoStmt *stmt, bool atomic)
{
	InlineCodeBlock *codeblock = makeNode(InlineCodeBlock);
	ListCell   *arg;
	DefElem    *as_item = NULL;
	DefElem    *language_item = NULL;
	char	   *language;
	Oid			laninline;
	HeapTuple	languageTuple;
	Form_pg_language languageStruct;

	/* Process options we got from gram.y */
	foreach(arg, stmt->args)
	{
		DefElem    *defel = (DefElem *) lfirst(arg);

		if (strcmp(defel->defname, "as") == 0)
		{
			if (as_item)
Improve reporting of "conflicting or redundant options" errors.
When reporting "conflicting or redundant options" errors, try to
ensure that errposition() is used, to help the user identify the
offending option.
Formerly, errposition() was invoked in less than 60% of cases. This
patch raises that to over 90%, but there remain a few places where the
ParseState is not readily available. Using errdetail() might improve
the error in such cases, but that is left as a task for the future.
Additionally, since this error is thrown from over 100 places in the
codebase, introduce a dedicated function to throw it, reducing code
duplication.
Extracted from a slightly larger patch by Vignesh C. Reviewed by
Bharath Rupireddy, Alvaro Herrera, Dilip Kumar, Hou Zhijie, Peter
Smith, Daniel Gustafsson, Julien Rouhaud and me.
Discussion: https://postgr.es/m/CALDaNm33FFSS5tVyvmkoK2cCMuDVxcui=gFrjti9ROfynqSAGA@mail.gmail.com
2021-07-15 09:49:45 +02:00
|
|
|
errorConflictingDefElem(defel, pstate);
|
2009-09-23 01:43:43 +02:00
|
|
|
as_item = defel;
|
|
|
|
}
|
|
|
|
else if (strcmp(defel->defname, "language") == 0)
|
|
|
|
{
|
|
|
|
if (language_item)
|
Improve reporting of "conflicting or redundant options" errors.
When reporting "conflicting or redundant options" errors, try to
ensure that errposition() is used, to help the user identify the
offending option.
Formerly, errposition() was invoked in less than 60% of cases. This
patch raises that to over 90%, but there remain a few places where the
ParseState is not readily available. Using errdetail() might improve
the error in such cases, but that is left as a task for the future.
Additionally, since this error is thrown from over 100 places in the
codebase, introduce a dedicated function to throw it, reducing code
duplication.
Extracted from a slightly larger patch by Vignesh C. Reviewed by
Bharath Rupireddy, Alvaro Herrera, Dilip Kumar, Hou Zhijie, Peter
Smith, Daniel Gustafsson, Julien Rouhaud and me.
Discussion: https://postgr.es/m/CALDaNm33FFSS5tVyvmkoK2cCMuDVxcui=gFrjti9ROfynqSAGA@mail.gmail.com
2021-07-15 09:49:45 +02:00
|
|
|
errorConflictingDefElem(defel, pstate);
|
2009-09-23 01:43:43 +02:00
|
|
|
language_item = defel;
|
|
|
|
}
|
|
|
|
else
|
|
|
|
elog(ERROR, "option \"%s\" not recognized",
|
|
|
|
defel->defname);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (as_item)
|
|
|
|
codeblock->source_text = strVal(as_item->arg);
|
|
|
|
else
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_SYNTAX_ERROR),
|
|
|
|
errmsg("no inline code specified")));
|
|
|
|
|
2010-01-26 17:33:40 +01:00
|
|
|
/* if LANGUAGE option wasn't specified, use the default */
|
2009-09-23 01:43:43 +02:00
|
|
|
if (language_item)
|
|
|
|
language = strVal(language_item->arg);
|
|
|
|
else
|
2010-01-26 17:33:40 +01:00
|
|
|
language = "plpgsql";
|
2009-09-23 01:43:43 +02:00
|
|
|
|
|
|
|
/* Look up the language and validate permissions */
|
2011-11-17 20:20:13 +01:00
|
|
|
languageTuple = SearchSysCache1(LANGNAME, PointerGetDatum(language));
|
2009-09-23 01:43:43 +02:00
|
|
|
if (!HeapTupleIsValid(languageTuple))
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_UNDEFINED_OBJECT),
|
2011-11-17 20:20:13 +01:00
|
|
|
errmsg("language \"%s\" does not exist", language),
|
Invent "trusted" extensions, and remove the pg_pltemplate catalog.
This patch creates a new extension property, "trusted". An extension
that's marked that way in its control file can be installed by a
non-superuser who has the CREATE privilege on the current database,
even if the extension contains objects that normally would have to be
created by a superuser. The objects within the extension will (by
default) be owned by the bootstrap superuser, but the extension itself
will be owned by the calling user. This allows replicating the old
behavior around trusted procedural languages, without all the
special-case logic in CREATE LANGUAGE. We have, however, chosen to
loosen the rules slightly: formerly, only a database owner could take
advantage of the special case that allowed installation of a trusted
language, but now anyone who has CREATE privilege can do so.
Having done that, we can delete the pg_pltemplate catalog, moving the
knowledge it contained into the extension script files for the various
PLs. This ends up being no change at all for the in-core PLs, but it is
a large step forward for external PLs: they can now have the same ease
of installation as core PLs do. The old "trusted PL" behavior was only
available to PLs that had entries in pg_pltemplate, but now any
extension can be marked trusted if appropriate.
This also removes one of the stumbling blocks for our Python 2 -> 3
migration, since the association of "plpythonu" with Python 2 is no
longer hard-wired into pg_pltemplate's initial contents. Exactly where
we go from here on that front remains to be settled, but one problem
is fixed.
Patch by me, reviewed by Peter Eisentraut, Stephen Frost, and others.
Discussion: https://postgr.es/m/5889.1566415762@sss.pgh.pa.us
2020-01-30 00:42:43 +01:00
|
|
|
(extension_file_exists(language) ?
|
2018-04-27 19:42:03 +02:00
|
|
|
errhint("Use CREATE EXTENSION to load the language into the database.") : 0)));
|
2009-09-23 01:43:43 +02:00
|
|
|
|
|
|
|
languageStruct = (Form_pg_language) GETSTRUCT(languageTuple);
|
Remove WITH OIDS support, change oid catalog column visibility.
Previously tables declared WITH OIDS, including a significant fraction
of the catalog tables, stored the oid column not as a normal column,
but as part of the tuple header.
This special column was not shown by default, which was somewhat odd,
as it's often (consider e.g. pg_class.oid) one of the more important
parts of a row. Neither pg_dump nor COPY included the contents of the
oid column by default.
The fact that the oid column was not an ordinary column necessitated a
significant amount of special case code to support oid columns. That
already was painful for the existing, but upcoming work aiming to make
table storage pluggable, would have required expanding and duplicating
that "specialness" significantly.
WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0).
Remove it.
Removing includes:
- CREATE TABLE and ALTER TABLE syntax for declaring the table to be
WITH OIDS has been removed (WITH (oids[ = true]) will error out)
- pg_dump does not support dumping tables declared WITH OIDS and will
issue a warning when dumping one (and ignore the oid column).
- restoring an pg_dump archive with pg_restore will warn when
restoring a table with oid contents (and ignore the oid column)
- COPY will refuse to load binary dump that includes oids.
- pg_upgrade will error out when encountering tables declared WITH
OIDS, they have to be altered to remove the oid column first.
- Functionality to access the oid of the last inserted row (like
plpgsql's RESULT_OID, spi's SPI_lastoid, ...) has been removed.
The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false)
for CREATE TABLE) is still supported. While that requires a bit of
support code, it seems unnecessary to break applications / dumps that
do not use oids, and are explicit about not using them.
The biggest user of WITH OID columns was postgres' catalog. This
commit changes all 'magic' oid columns to be columns that are normally
declared and stored. To reduce unnecessary query breakage all the
newly added columns are still named 'oid', even if a table's column
naming scheme would indicate 'reloid' or such. This obviously
requires adapting a lot code, mostly replacing oid access via
HeapTupleGetOid() with access to the underlying Form_pg_*->oid column.
The bootstrap process now assigns oids for all oid columns in
genbki.pl that do not have an explicit value (starting at the largest
oid previously used), only oids assigned later by oids will be above
FirstBootstrapObjectId. As the oid column now is a normal column the
special bootstrap syntax for oids has been removed.
Oids are not automatically assigned during insertion anymore, all
backend code explicitly assigns oids with GetNewOidWithIndex(). For
the rare case that insertions into the catalog via SQL are called for
the new pg_nextoid() function can be used (which only works on catalog
tables).
The fact that oid columns on system tables are now normal columns
means that they will be included in the set of columns expanded
by * (i.e. SELECT * FROM pg_class will now include the table's oid,
previously it did not). It'd not technically be hard to hide oid
column by default, but that'd mean confusing behavior would either
have to be carried forward forever, or it'd cause breakage down the
line.
While it's not unlikely that further adjustments are needed, the
scope/invasiveness of the patch makes it worthwhile to get merge this
now. It's painful to maintain externally, too complicated to commit
after the code code freeze, and a dependency of a number of other
patches.
Catversion bump, for obvious reasons.
Author: Andres Freund, with contributions by John Naylor
Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de
2018-11-21 00:36:57 +01:00
|
|
|
codeblock->langOid = languageStruct->oid;
|
2009-11-06 22:57:57 +01:00
|
|
|
codeblock->langIsTrusted = languageStruct->lanpltrusted;
|
Transaction control in PL procedures
In each of the supplied procedural languages (PL/pgSQL, PL/Perl,
PL/Python, PL/Tcl), add language-specific commit and rollback
functions/commands to control transactions in procedures in that
language. Add similar underlying functions to SPI. Some additional
cleanup so that transaction commit or abort doesn't blow away data
structures still used by the procedure call. Add execution context
tracking to CALL and DO statements so that transaction control commands
can only be issued in top-level procedure and block calls, not function
calls or other procedure or block calls.
- SPI
Add a new function SPI_connect_ext() that is like SPI_connect() but
allows passing option flags. The only option flag right now is
SPI_OPT_NONATOMIC. A nonatomic SPI connection can execute transaction
control commands, otherwise it's not allowed. This is meant to be
passed down from CALL and DO statements which themselves know in which
context they are called. A nonatomic SPI connection uses different
memory management. A normal SPI connection allocates its memory in
TopTransactionContext. For nonatomic connections we use PortalContext
instead. As the comment in SPI_connect_ext() (previously SPI_connect())
indicates, one could potentially use PortalContext in all cases, but it
seems safest to leave the existing uses alone, because this stuff is
complicated enough already.
SPI also gets new functions SPI_start_transaction(), SPI_commit(), and
SPI_rollback(), which can be used by PLs to implement their transaction
control logic.
- portalmem.c
Some adjustments were made in the code that cleans up portals at
transaction abort. The portal code could already handle a command
*committing* a transaction and continuing (e.g., VACUUM), but it was not
quite prepared for a command *aborting* a transaction and continuing.
In AtAbort_Portals(), remove the code that marks an active portal as
failed. As the comment there already predicted, this doesn't work if
the running command wants to keep running after transaction abort. And
it's actually not necessary, because pquery.c is careful to run all
portal code in a PG_TRY block and explicitly runs MarkPortalFailed() if
there is an exception. So the code in AtAbort_Portals() is never used
anyway.
In AtAbort_Portals() and AtCleanup_Portals(), we need to be careful not
to clean up active portals too much. This mirrors similar code in
PreCommit_Portals().
- PL/Perl
Gets new functions spi_commit() and spi_rollback()
- PL/pgSQL
Gets new commands COMMIT and ROLLBACK.
Update the PL/SQL porting example in the documentation to reflect that
transactions are now possible in procedures.
- PL/Python
Gets new functions plpy.commit and plpy.rollback.
- PL/Tcl
Gets new commands commit and rollback.
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
2018-01-22 14:30:16 +01:00
|
|
|
codeblock->atomic = atomic;
|
2009-09-23 01:43:43 +02:00
|
|
|
|
|
|
|
if (languageStruct->lanpltrusted)
|
|
|
|
{
|
|
|
|
/* if trusted language, need USAGE privilege */
|
|
|
|
AclResult aclresult;
|
|
|
|
|
2022-11-13 08:11:17 +01:00
|
|
|
aclresult = object_aclcheck(LanguageRelationId, codeblock->langOid, GetUserId(),
|
2023-05-19 23:24:48 +02:00
|
|
|
ACL_USAGE);
|
2009-09-23 01:43:43 +02:00
|
|
|
if (aclresult != ACLCHECK_OK)
|
2017-12-02 15:26:34 +01:00
|
|
|
aclcheck_error(aclresult, OBJECT_LANGUAGE,
|
2009-09-23 01:43:43 +02:00
|
|
|
NameStr(languageStruct->lanname));
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
/* if untrusted language, must be superuser */
|
|
|
|
if (!superuser())
|
2017-12-02 15:26:34 +01:00
|
|
|
aclcheck_error(ACLCHECK_NO_PRIV, OBJECT_LANGUAGE,
|
2009-09-23 01:43:43 +02:00
|
|
|
NameStr(languageStruct->lanname));
|
|
|
|
}
|
|
|
|
|
|
|
|
/* get the handler function's OID */
|
|
|
|
laninline = languageStruct->laninline;
|
|
|
|
if (!OidIsValid(laninline))
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
|
|
|
|
errmsg("language \"%s\" does not support inline code execution",
|
|
|
|
NameStr(languageStruct->lanname))));
|
|
|
|
|
|
|
|
ReleaseSysCache(languageTuple);
|
|
|
|
|
|
|
|
/* execute the inline handler */
|
|
|
|
OidFunctionCall1(laninline, PointerGetDatum(codeblock));
|
|
|
|
}

/*
 * Execute CALL statement
 *
 * Inside a top-level CALL statement, transaction-terminating commands such as
 * COMMIT or a PL-specific equivalent are allowed. The terminology in the SQL
 * standard is that CALL establishes a non-atomic execution context. Most
 * other commands establish an atomic execution context, in which transaction
 * control actions are not allowed. If there are nested executions of CALL,
 * we want to track the execution context recursively, so that the nested
 * CALLs can also do transaction control. Note, however, that for example in
 * CALL -> SELECT -> CALL, the second call cannot do transaction control,
 * because the SELECT in between establishes an atomic execution context.
 *
 * So when ExecuteCallStmt() is called from the top level, we pass in atomic =
 * false (recall that that means transactions = yes). We then create a
 * CallContext node with content atomic = false, which is passed in the
 * fcinfo->context field to the procedure invocation. The language
 * implementation should then take appropriate measures to allow or prevent
 * transaction commands based on that information, e.g., call
 * SPI_connect_ext(SPI_OPT_NONATOMIC). The language should also pass on the
 * atomic flag to any nested invocations to CALL.
 *
 * The expression data structures and execution context that we create
 * within this function are children of the portalContext of the Portal
 * that the CALL utility statement runs in. Therefore, any pass-by-ref
 * values that we're passing to the procedure will survive transaction
 * commits that might occur inside the procedure.
 */
void
ExecuteCallStmt(CallStmt *stmt, ParamListInfo params, bool atomic, DestReceiver *dest)
{
	LOCAL_FCINFO(fcinfo, FUNC_MAX_ARGS);
	ListCell   *lc;
	FuncExpr   *fexpr;
	int			nargs;
	int			i;
	AclResult	aclresult;
	FmgrInfo	flinfo;
	CallContext *callcontext;
	EState	   *estate;
	ExprContext *econtext;
	HeapTuple	tp;
	PgStat_FunctionCallUsage fcusage;
	Datum		retval;

	fexpr = stmt->funcexpr;
	Assert(fexpr);
	Assert(IsA(fexpr, FuncExpr));

	aclresult = object_aclcheck(ProcedureRelationId, fexpr->funcid, GetUserId(), ACL_EXECUTE);
	if (aclresult != ACLCHECK_OK)
		aclcheck_error(aclresult, OBJECT_PROCEDURE, get_func_name(fexpr->funcid));

	/* Prep the context object we'll pass to the procedure */
2018-01-22 14:30:16 +01:00
callcontext = makeNode(CallContext);
callcontext->atomic = atomic;

tp = SearchSysCache1(PROCOID, ObjectIdGetDatum(fexpr->funcid));
if (!HeapTupleIsValid(tp))
	elog(ERROR, "cache lookup failed for function %u", fexpr->funcid);

/*
 * If proconfig is set we can't allow transaction commands because of the
 * way the GUC stacking works: The transaction boundary would have to pop
 * the proconfig setting off the stack.  That restriction could be lifted
 * by redesigning the GUC nesting mechanism a bit.
 */
if (!heap_attisnull(tp, Anum_pg_proc_proconfig, NULL))
	callcontext->atomic = true;

/*
 * In security definer procedures, we can't allow transaction commands.
 * StartTransaction() insists that the security context stack is empty,
 * and AbortTransaction() resets the security context.  This could be
 * reorganized, but right now it doesn't work.
 */
if (((Form_pg_proc) GETSTRUCT(tp))->prosecdef)
	callcontext->atomic = true;

ReleaseSysCache(tp);

/* safety check; see ExecInitFunc() */
Reconsider the handling of procedure OUT parameters.
Commit 2453ea142 redefined pg_proc.proargtypes to include the types of
OUT parameters, for procedures only. While that had some advantages
for implementing the SQL-spec behavior of DROP PROCEDURE, it was pretty
disastrous from a number of other perspectives. Notably, since the
primary key of pg_proc is name + proargtypes, this made it possible to
have multiple procedures with identical names + input arguments and
differing output argument types. That would make it impossible to call
any one of the procedures by writing just NULL (or "?", or any other
data-type-free notation) for the output argument(s). The change also
seems likely to cause grave confusion for client applications that
examine pg_proc and expect the traditional definition of proargtypes.
Hence, revert the definition of proargtypes to what it was, and
undo a number of complications that had been added to support that.
To support the SQL-spec behavior of DROP PROCEDURE, when there are
no argmode markers in the command's parameter list, we perform the
lookup both ways (that is, matching against both proargtypes and
proallargtypes), succeeding if we get just one unique match.
In principle this could result in ambiguous-function failures
that would not happen when using only one of the two rules.
However, overloading of procedure names is thought to be a pretty
rare usage, so this shouldn't cause many problems in practice.
Postgres-specific code such as pg_dump can defend against any
possibility of such failures by being careful to specify argmodes
for all procedure arguments.
This also fixes a few other bugs in the area of CALL statements
with named parameters, and improves the documentation a little.
catversion bump forced because the representation of procedures
with OUT arguments changes.
Discussion: https://postgr.es/m/3742981.1621533210@sss.pgh.pa.us
2021-06-10 23:11:36 +02:00
nargs = list_length(fexpr->args);
if (nargs > FUNC_MAX_ARGS)
	ereport(ERROR,
			(errcode(ERRCODE_TOO_MANY_ARGUMENTS),
			 errmsg_plural("cannot pass more than %d argument to a procedure",
						   "cannot pass more than %d arguments to a procedure",
						   FUNC_MAX_ARGS,
						   FUNC_MAX_ARGS)));

/* Initialize function call structure */
InvokeFunctionExecuteHook(fexpr->funcid);
fmgr_info(fexpr->funcid, &flinfo);
fmgr_info_set_expr((Node *) fexpr, &flinfo);
Change function call information to be variable length.
Before this change FunctionCallInfoData, the struct arguments etc for
V1 function calls are stored in, always had space for
FUNC_MAX_ARGS/100 arguments, storing datums and their nullness in two
arrays. For nearly every function call 100 arguments is far more than
needed, therefore wasting memory. Arg and argnull being two separate
arrays also guarantees that to access a single argument, two
cachelines have to be touched.
Change the layout so there's a single variable-length array with pairs
of value / isnull. That drastically reduces memory consumption for
most function calls (on x86-64 a two argument function now uses
64bytes, previously 936 bytes), and makes it very likely that argument
value and its nullness are on the same cacheline.
Arguments are stored in a new NullableDatum struct, which, due to
padding, needs more memory per argument than before. But as usually
far fewer arguments are stored, and individual arguments are cheaper
to access, that's still a clear win. It's likely that there's other
places where conversion to NullableDatum arrays would make sense,
e.g. TupleTableSlots, but that's for another commit.
Because the function call information is now variable-length
allocations have to take the number of arguments into account. For
heap allocations that can be done with SizeForFunctionCallInfoData(),
for on-stack allocations there's a new LOCAL_FCINFO(name, nargs) macro
that helps to allocate an appropriately sized and aligned variable.
Some places with stack allocation function call information don't know
the number of arguments at compile time, and currently variably sized
stack allocations aren't allowed in postgres. Therefore allow for
FUNC_MAX_ARGS space in these cases. They're not that common, so for
now that seems acceptable.
Because of the need to allocate FunctionCallInfo of the appropriate
size, older extensions may need to update their code. To avoid subtle
breakages, the FunctionCallInfoData struct has been renamed to
FunctionCallInfoBaseData. Most code only references FunctionCallInfo,
so that shouldn't cause much collateral damage.
This change is also a prerequisite for more efficient expression JIT
compilation (by allocating the function call information on the stack,
allowing LLVM to optimize it away); previously the size of the call
information caused problems inside LLVM's optimizer.
Author: Andres Freund
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/20180605172952.x34m5uz6ju6enaem@alap3.anarazel.de
2019-01-26 23:17:52 +01:00
|
|
|
InitFunctionCallInfoData(*fcinfo, &flinfo, nargs, fexpr->inputcollid,
|
|
|
|
(Node *) callcontext, NULL);
|
2017-11-30 14:46:13 +01:00
|
|
|
|
Avoid premature free of pass-by-reference CALL arguments.
Prematurely freeing the EState used to evaluate CALL arguments led, in some
cases, to passing dangling pointers to the procedure. This was masked in
trivial cases because the argument pointers would point to Const nodes in
the original expression tree, and in some other cases because the result
value would end up in the standalone ExprContext rather than in memory
belonging to the EState --- but that wasn't exactly high quality
programming either, because the standalone ExprContext was never
explicitly freed, breaking assorted API contracts.
In addition, using a separate EState for each argument was just silly.
So let's use just one EState, and one ExprContext, and make the latter
belong to the former rather than be standalone, and clean up the EState
(and hence the ExprContext) post-call.
While at it, improve the function's commentary a bit.
Discussion: https://postgr.es/m/29173.1518282748@sss.pgh.pa.us
2018-02-10 19:37:12 +01:00
|
|
|
/*
|
|
|
|
* Evaluate procedure arguments inside a suitable execution context. Note
|
|
|
|
* we can't free this context till the procedure returns.
|
|
|
|
*/
|
|
|
|
estate = CreateExecutorState();
|
2018-02-21 00:03:31 +01:00
|
|
|
estate->es_param_list_info = params;
|
Avoid premature free of pass-by-reference CALL arguments.
Prematurely freeing the EState used to evaluate CALL arguments led, in some
cases, to passing dangling pointers to the procedure. This was masked in
trivial cases because the argument pointers would point to Const nodes in
the original expression tree, and in some other cases because the result
value would end up in the standalone ExprContext rather than in memory
belonging to the EState --- but that wasn't exactly high quality
programming either, because the standalone ExprContext was never
explicitly freed, breaking assorted API contracts.
In addition, using a separate EState for each argument was just silly.
So let's use just one EState, and one ExprContext, and make the latter
belong to the former rather than be standalone, and clean up the EState
(and hence the ExprContext) post-call.
While at it, improve the function's commentary a bit.
Discussion: https://postgr.es/m/29173.1518282748@sss.pgh.pa.us
2018-02-10 19:37:12 +01:00
|
|
|
econtext = CreateExprContext(estate);
|
|
|
|
|
2021-09-22 01:06:33 +02:00
|
|
|
/*
|
|
|
|
* If we're called in non-atomic context, we also have to ensure that the
|
|
|
|
* argument expressions run with an up-to-date snapshot. Our caller will
|
|
|
|
* have provided a current snapshot in atomic contexts, but not in
|
|
|
|
* non-atomic contexts, because the possibility of a COMMIT/ROLLBACK
|
|
|
|
* destroying the snapshot makes higher-level management too complicated.
|
|
|
|
*/
|
|
|
|
if (!atomic)
|
|
|
|
PushActiveSnapshot(GetTransactionSnapshot());
|
|
|
|
|
2017-11-30 14:46:13 +01:00
|
|
|
i = 0;
|
2018-01-26 18:25:44 +01:00
|
|
|
foreach(lc, fexpr->args)
|
2017-11-30 14:46:13 +01:00
|
|
|
{
|
Reconsider the handling of procedure OUT parameters.
Commit 2453ea142 redefined pg_proc.proargtypes to include the types of
OUT parameters, for procedures only. While that had some advantages
for implementing the SQL-spec behavior of DROP PROCEDURE, it was pretty
disastrous from a number of other perspectives. Notably, since the
primary key of pg_proc is name + proargtypes, this made it possible to
have multiple procedures with identical names + input arguments and
differing output argument types. That would make it impossible to call
any one of the procedures by writing just NULL (or "?", or any other
data-type-free notation) for the output argument(s). The change also
seems likely to cause grave confusion for client applications that
examine pg_proc and expect the traditional definition of proargtypes.
Hence, revert the definition of proargtypes to what it was, and
undo a number of complications that had been added to support that.
To support the SQL-spec behavior of DROP PROCEDURE, when there are
no argmode markers in the command's parameter list, we perform the
lookup both ways (that is, matching against both proargtypes and
proallargtypes), succeeding if we get just one unique match.
In principle this could result in ambiguous-function failures
that would not happen when using only one of the two rules.
However, overloading of procedure names is thought to be a pretty
rare usage, so this shouldn't cause many problems in practice.
Postgres-specific code such as pg_dump can defend against any
possibility of such failures by being careful to specify argmodes
for all procedure arguments.
This also fixes a few other bugs in the area of CALL statements
with named parameters, and improves the documentation a little.
catversion bump forced because the representation of procedures
with OUT arguments changes.
Discussion: https://postgr.es/m/3742981.1621533210@sss.pgh.pa.us
2021-06-10 23:11:36 +02:00
|
|
|
ExprState *exprstate;
|
|
|
|
Datum val;
|
|
|
|
bool isnull;
|
2017-11-30 14:46:13 +01:00
|
|
|
|
Reconsider the handling of procedure OUT parameters.
Commit 2453ea142 redefined pg_proc.proargtypes to include the types of
OUT parameters, for procedures only. While that had some advantages
for implementing the SQL-spec behavior of DROP PROCEDURE, it was pretty
disastrous from a number of other perspectives. Notably, since the
primary key of pg_proc is name + proargtypes, this made it possible to
have multiple procedures with identical names + input arguments and
differing output argument types. That would make it impossible to call
any one of the procedures by writing just NULL (or "?", or any other
data-type-free notation) for the output argument(s). The change also
seems likely to cause grave confusion for client applications that
examine pg_proc and expect the traditional definition of proargtypes.
Hence, revert the definition of proargtypes to what it was, and
undo a number of complications that had been added to support that.
To support the SQL-spec behavior of DROP PROCEDURE, when there are
no argmode markers in the command's parameter list, we perform the
lookup both ways (that is, matching against both proargtypes and
proallargtypes), succeeding if we get just one unique match.
In principle this could result in ambiguous-function failures
that would not happen when using only one of the two rules.
However, overloading of procedure names is thought to be a pretty
rare usage, so this shouldn't cause many problems in practice.
Postgres-specific code such as pg_dump can defend against any
possibility of such failures by being careful to specify argmodes
for all procedure arguments.
This also fixes a few other bugs in the area of CALL statements
with named parameters, and improves the documentation a little.
catversion bump forced because the representation of procedures
with OUT arguments changes.
Discussion: https://postgr.es/m/3742981.1621533210@sss.pgh.pa.us
2021-06-10 23:11:36 +02:00
|
|
|
exprstate = ExecPrepareExpr(lfirst(lc), estate);
|
Avoid premature free of pass-by-reference CALL arguments.
Prematurely freeing the EState used to evaluate CALL arguments led, in some
cases, to passing dangling pointers to the procedure. This was masked in
trivial cases because the argument pointers would point to Const nodes in
the original expression tree, and in some other cases because the result
value would end up in the standalone ExprContext rather than in memory
belonging to the EState --- but that wasn't exactly high quality
programming either, because the standalone ExprContext was never
explicitly freed, breaking assorted API contracts.
In addition, using a separate EState for each argument was just silly.
So let's use just one EState, and one ExprContext, and make the latter
belong to the former rather than be standalone, and clean up the EState
(and hence the ExprContext) post-call.
While at it, improve the function's commentary a bit.
Discussion: https://postgr.es/m/29173.1518282748@sss.pgh.pa.us
2018-02-10 19:37:12 +01:00
|
|
|
|
Reconsider the handling of procedure OUT parameters.
Commit 2453ea142 redefined pg_proc.proargtypes to include the types of
OUT parameters, for procedures only. While that had some advantages
for implementing the SQL-spec behavior of DROP PROCEDURE, it was pretty
disastrous from a number of other perspectives. Notably, since the
primary key of pg_proc is name + proargtypes, this made it possible to
have multiple procedures with identical names + input arguments and
differing output argument types. That would make it impossible to call
any one of the procedures by writing just NULL (or "?", or any other
data-type-free notation) for the output argument(s). The change also
seems likely to cause grave confusion for client applications that
examine pg_proc and expect the traditional definition of proargtypes.
Hence, revert the definition of proargtypes to what it was, and
undo a number of complications that had been added to support that.
To support the SQL-spec behavior of DROP PROCEDURE, when there are
no argmode markers in the command's parameter list, we perform the
lookup both ways (that is, matching against both proargtypes and
proallargtypes), succeeding if we get just one unique match.
In principle this could result in ambiguous-function failures
that would not happen when using only one of the two rules.
However, overloading of procedure names is thought to be a pretty
rare usage, so this shouldn't cause many problems in practice.
Postgres-specific code such as pg_dump can defend against any
possibility of such failures by being careful to specify argmodes
for all procedure arguments.
This also fixes a few other bugs in the area of CALL statements
with named parameters, and improves the documentation a little.
catversion bump forced because the representation of procedures
with OUT arguments changes.
Discussion: https://postgr.es/m/3742981.1621533210@sss.pgh.pa.us
2021-06-10 23:11:36 +02:00
|
|
|
val = ExecEvalExprSwitchContext(exprstate, econtext, &isnull);
|
2017-11-30 14:46:13 +01:00
|
|
|
|
Reconsider the handling of procedure OUT parameters.
Commit 2453ea142 redefined pg_proc.proargtypes to include the types of
OUT parameters, for procedures only. While that had some advantages
for implementing the SQL-spec behavior of DROP PROCEDURE, it was pretty
disastrous from a number of other perspectives. Notably, since the
primary key of pg_proc is name + proargtypes, this made it possible to
have multiple procedures with identical names + input arguments and
differing output argument types. That would make it impossible to call
any one of the procedures by writing just NULL (or "?", or any other
data-type-free notation) for the output argument(s). The change also
seems likely to cause grave confusion for client applications that
examine pg_proc and expect the traditional definition of proargtypes.
Hence, revert the definition of proargtypes to what it was, and
undo a number of complications that had been added to support that.
To support the SQL-spec behavior of DROP PROCEDURE, when there are
no argmode markers in the command's parameter list, we perform the
lookup both ways (that is, matching against both proargtypes and
proallargtypes), succeeding if we get just one unique match.
In principle this could result in ambiguous-function failures
that would not happen when using only one of the two rules.
However, overloading of procedure names is thought to be a pretty
rare usage, so this shouldn't cause many problems in practice.
Postgres-specific code such as pg_dump can defend against any
possibility of such failures by being careful to specify argmodes
for all procedure arguments.
This also fixes a few other bugs in the area of CALL statements
with named parameters, and improves the documentation a little.
catversion bump forced because the representation of procedures
with OUT arguments changes.
Discussion: https://postgr.es/m/3742981.1621533210@sss.pgh.pa.us
2021-06-10 23:11:36 +02:00
|
|
|
fcinfo->args[i].value = val;
|
|
|
|
fcinfo->args[i].isnull = isnull;
|
2017-11-30 14:46:13 +01:00
|
|
|
|
|
|
|
i++;
|
|
|
|
}
|
|
|
|
|
2021-09-22 01:06:33 +02:00
|
|
|
/* Get rid of temporary snapshot for arguments, if we made one */
|
|
|
|
if (!atomic)
|
|
|
|
PopActiveSnapshot();
|
|
|
|
|
|
|
|
/* Here we actually call the procedure */
|
Change function call information to be variable length.
Before this change FunctionCallInfoData, the struct arguments etc for
V1 function calls are stored in, always had space for
FUNC_MAX_ARGS/100 arguments, storing datums and their nullness in two
arrays. For nearly every function call 100 arguments is far more than
needed, therefore wasting memory. Arg and argnull being two separate
arrays also guarantees that to access a single argument, two
cachelines have to be touched.
Change the layout so there's a single variable-length array with pairs
of value / isnull. That drastically reduces memory consumption for
most function calls (on x86-64 a two argument function now uses
64bytes, previously 936 bytes), and makes it very likely that argument
value and its nullness are on the same cacheline.
Arguments are stored in a new NullableDatum struct, which, due to
padding, needs more memory per argument than before. But as usually
far fewer arguments are stored, and individual arguments are cheaper
to access, that's still a clear win. It's likely that there's other
places where conversion to NullableDatum arrays would make sense,
e.g. TupleTableSlots, but that's for another commit.
Because the function call information is now variable-length
allocations have to take the number of arguments into account. For
heap allocations that can be done with SizeForFunctionCallInfoData(),
for on-stack allocations there's a new LOCAL_FCINFO(name, nargs) macro
that helps to allocate an appropriately sized and aligned variable.
Some places with stack allocation function call information don't know
the number of arguments at compile time, and currently variably sized
stack allocations aren't allowed in postgres. Therefore allow for
FUNC_MAX_ARGS space in these cases. They're not that common, so for
now that seems acceptable.
Because of the need to allocate FunctionCallInfo of the appropriate
size, older extensions may need to update their code. To avoid subtle
breakages, the FunctionCallInfoData struct has been renamed to
FunctionCallInfoBaseData. Most code only references FunctionCallInfo,
so that shouldn't cause much collateral damage.
This change is also a prerequisite for more efficient expression JIT
compilation (by allocating the function call information on the stack,
allowing LLVM to optimize it away); previously the size of the call
information caused problems inside LLVM's optimizer.
Author: Andres Freund
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/20180605172952.x34m5uz6ju6enaem@alap3.anarazel.de
2019-01-26 23:17:52 +01:00
	pgstat_init_function_usage(fcinfo, &fcusage);
	retval = FunctionCallInvoke(fcinfo);
	pgstat_end_function_usage(&fcusage, true);

	/* Handle the procedure's outputs */
	if (fexpr->funcresulttype == VOIDOID)
	{
		/* do nothing */
	}
	else if (fexpr->funcresulttype == RECORDOID)
	{
		/* send tuple to client */
		HeapTupleHeader td;
		Oid			tupType;
		int32		tupTypmod;
		TupleDesc	retdesc;
		HeapTupleData rettupdata;
		TupOutputState *tstate;
		TupleTableSlot *slot;
		if (fcinfo->isnull)
			elog(ERROR, "procedure returned null record");
Restore the portal-level snapshot after procedure COMMIT/ROLLBACK.
COMMIT/ROLLBACK necessarily destroys all snapshots within the session.
The original implementation of intra-procedure transactions just
cavalierly did that, ignoring the fact that this left us executing in
a rather different environment than normal. In particular, it turns
out that handling of toasted datums depends rather critically on there
being an outer ActiveSnapshot: otherwise, when SPI or the core
executor pop whatever snapshot they used and return, it's unsafe to
dereference any toasted datums that may appear in the query result.
It's possible to demonstrate "no known snapshots" and "missing chunk
number N for toast value" errors as a result of this oversight.
Historically this outer snapshot has been held by the Portal code,
and that seems like a good plan to preserve. So add infrastructure
to pquery.c to allow re-establishing the Portal-owned snapshot if it's
not there anymore, and add enough bookkeeping support that we can tell
whether it is or not.
We can't, however, just re-establish the Portal snapshot as part of
COMMIT/ROLLBACK. As in normal transaction start, acquiring the first
snapshot should wait until after SET and LOCK commands. Hence, teach
spi.c about doing this at the right time. (Note that this patch
doesn't fix the problem for any PLs that try to run intra-procedure
transactions without using SPI to execute SQL commands.)
This makes SPI's no_snapshots parameter rather a misnomer, so in HEAD,
rename that to allow_nonatomic.
replication/logical/worker.c also needs some fixes, because it wasn't
careful to hold a snapshot open around AFTER trigger execution.
That code doesn't use a Portal, which I suspect someday we're gonna
have to fix. But for now, just rearrange the order of operations.
This includes back-patching the recent addition of finish_estate()
to centralize the cleanup logic there.
This also back-patches commit 2ecfeda3e into v13, to improve the
test coverage for worker.c (it was that test that exposed that
worker.c's snapshot management is wrong).
Per bug #15990 from Andreas Wicht. Back-patch to v11 where
intra-procedure COMMIT was added.
Discussion: https://postgr.es/m/15990-eee2ac466b11293d@postgresql.org
2021-05-21 20:03:53 +02:00
		/*
		 * Ensure there's an active snapshot whilst we execute whatever's
		 * involved here.  Note that this is *not* sufficient to make the
		 * world safe for TOAST pointers to be included in the returned data:
		 * the referenced data could have gone away while we didn't hold a
		 * snapshot.  Hence, it's incumbent on PLs that can do COMMIT/ROLLBACK
		 * to not return TOAST pointers, unless those pointers were fetched
		 * after the last COMMIT/ROLLBACK in the procedure.
		 *
		 * XXX that is a really nasty, hard-to-test requirement.  Is there a
		 * way to remove it?
		 */
		EnsurePortalSnapshotExists();

		td = DatumGetHeapTupleHeader(retval);
		tupType = HeapTupleHeaderGetTypeId(td);
		tupTypmod = HeapTupleHeaderGetTypMod(td);
		retdesc = lookup_rowtype_tupdesc(tupType, tupTypmod);
Introduce notion of different types of slots (without implementing them).
Upcoming work intends to allow pluggable ways to introduce new ways of
storing table data. Accessing those table access methods from the
executor requires TupleTableSlots to be carry tuples in the native
format of such storage methods; otherwise there'll be a significant
conversion overhead.
Different access methods will require different data to store tuples
efficiently (just like virtual, minimal, heap already require fields
in TupleTableSlot). To allow that without requiring additional pointer
indirections, we want to have different structs (embedding
TupleTableSlot) for different types of slots. Thus different types of
slots are needed, which requires adapting creators of slots.
The slot that most efficiently can represent a type of tuple in an
executor node will often depend on the type of slot a child node
uses. Therefore we need to track the type of slot is returned by
nodes, so parent slots can create slots based on that.
Relatedly, JIT compilation of tuple deforming needs to know which type
of slot a certain expression refers to, so it can create an
appropriate deforming function for the type of tuple in the slot.
But not all nodes will only return one type of slot, e.g. an append
node will potentially return different types of slots for each of its
subplans.
Therefore add function that allows to query the type of a node's
result slot, and whether it'll always be the same type (whether it's
fixed). This can be queried using ExecGetResultSlotOps().
The scan, result, inner, outer type of slots are automatically
inferred from ExecInitScanTupleSlot(), ExecInitResultSlot(),
left/right subtrees respectively. If that's not correct for a node,
that can be overwritten using new fields in PlanState.
This commit does not introduce the actually abstracted implementation
of different kind of TupleTableSlots, that will be left for a followup
commit. The different types of slots introduced will, for now, still
use the same backing implementation.
While this already partially invalidates the big comment in
tuptable.h, it seems to make more sense to update it later, when the
different TupleTableSlot implementations actually exist.
Author: Ashutosh Bapat and Andres Freund, with changes by Amit Khandekar
Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
2018-11-16 07:00:30 +01:00
		tstate = begin_tup_output_tupdesc(dest, retdesc,
										  &TTSOpsHeapTuple);

		rettupdata.t_len = HeapTupleHeaderGetDatumLength(td);
		ItemPointerSetInvalid(&(rettupdata.t_self));
		rettupdata.t_tableOid = InvalidOid;
		rettupdata.t_data = td;

		slot = ExecStoreHeapTuple(&rettupdata, tstate->slot, false);
		tstate->dest->receiveSlot(slot, tstate->dest);

		end_tup_output(tstate);

		ReleaseTupleDesc(retdesc);
	}
	else
		elog(ERROR, "unexpected result type for procedure: %u",
			 fexpr->funcresulttype);
Avoid premature free of pass-by-reference CALL arguments.
Prematurely freeing the EState used to evaluate CALL arguments led, in some
cases, to passing dangling pointers to the procedure. This was masked in
trivial cases because the argument pointers would point to Const nodes in
the original expression tree, and in some other cases because the result
value would end up in the standalone ExprContext rather than in memory
belonging to the EState --- but that wasn't exactly high quality
programming either, because the standalone ExprContext was never
explicitly freed, breaking assorted API contracts.
In addition, using a separate EState for each argument was just silly.
So let's use just one EState, and one ExprContext, and make the latter
belong to the former rather than be standalone, and clean up the EState
(and hence the ExprContext) post-call.
While at it, improve the function's commentary a bit.
Discussion: https://postgr.es/m/29173.1518282748@sss.pgh.pa.us
2018-02-10 19:37:12 +01:00
	FreeExecutorState(estate);
}

/*
 * Construct the tuple descriptor for a CALL statement return
 */
TupleDesc
CallStmtResultDesc(CallStmt *stmt)
{
	FuncExpr   *fexpr;
	HeapTuple	tuple;
	TupleDesc	tupdesc;

	fexpr = stmt->funcexpr;

	tuple = SearchSysCache1(PROCOID, ObjectIdGetDatum(fexpr->funcid));
	if (!HeapTupleIsValid(tuple))
		elog(ERROR, "cache lookup failed for procedure %u", fexpr->funcid);

	tupdesc = build_function_result_tupdesc_t(tuple);

	ReleaseSysCache(tuple);

	return tupdesc;
}