/*-------------------------------------------------------------------------
 *
 * functioncmds.c
 *
 *	  Routines for CREATE and DROP FUNCTION commands and CREATE and DROP
 *	  CAST commands.
 *
 * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 *
 * IDENTIFICATION
 *	  src/backend/commands/functioncmds.c
 *
 * DESCRIPTION
 *	  These routines take the parse tree and pick out the
 *	  appropriate arguments/flags, and pass the results to the
 *	  corresponding "FooDefine" routines (in src/catalog) that do
 *	  the actual catalog-munging.  These routines also verify permission
 *	  of the user to execute the command.
 *
 * NOTES
 *	  These things must be defined and committed in the following order:
 *		"create function":
 *			input/output, recv/send procedures
 *		"create type":
 *			type
 *		"create operator":
 *			operators
 *
 *-------------------------------------------------------------------------
 */
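/*
 * A sketch of the ordering described in NOTES above, in SQL, using the
 * "complex" example type from the PostgreSQL documentation's C-function
 * chapter.  The shared-library name and the complex_add support function
 * are assumptions for illustration, not part of this file.  The first two
 * CREATE FUNCTION commands implicitly create "complex" as a shell type,
 * which CREATE TYPE then fills in:
 *
 *		CREATE FUNCTION complex_in(cstring) RETURNS complex
 *			AS 'complex' LANGUAGE C IMMUTABLE STRICT;
 *		CREATE FUNCTION complex_out(complex) RETURNS cstring
 *			AS 'complex' LANGUAGE C IMMUTABLE STRICT;
 *		CREATE TYPE complex (
 *			internallength = 16,
 *			input = complex_in,
 *			output = complex_out
 *		);
 *		CREATE OPERATOR + (
 *			leftarg = complex, rightarg = complex, procedure = complex_add
 *		);
 */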
#include "postgres.h"

#include "access/genam.h"
#include "access/heapam.h"
#include "access/htup_details.h"
#include "access/sysattr.h"
#include "catalog/dependency.h"
#include "catalog/indexing.h"
#include "catalog/objectaccess.h"
#include "catalog/pg_aggregate.h"
#include "catalog/pg_cast.h"
#include "catalog/pg_language.h"
#include "catalog/pg_namespace.h"
#include "catalog/pg_proc.h"
#include "catalog/pg_transform.h"
#include "catalog/pg_type.h"
#include "commands/alter.h"
#include "commands/defrem.h"
#include "commands/proclang.h"
#include "executor/execdesc.h"
#include "executor/executor.h"
#include "miscadmin.h"
#include "optimizer/clauses.h"
#include "optimizer/var.h"
#include "parser/parse_coerce.h"
#include "parser/parse_collate.h"
#include "parser/parse_expr.h"
#include "parser/parse_func.h"
#include "parser/parse_type.h"
#include "utils/acl.h"
#include "utils/builtins.h"
#include "utils/fmgroids.h"
#include "utils/guc.h"
#include "utils/lsyscache.h"
#include "utils/memutils.h"
#include "utils/rel.h"
#include "utils/syscache.h"
#include "utils/typcache.h"
#include "utils/tqual.h"

/*
 *	 Examine the RETURNS clause of the CREATE FUNCTION statement
 *	 and return information about it as *prorettype_p and *returnsSet.
 *
 * This is more complex than the average typename lookup because we want to
 * allow a shell type to be used, or even created if the specified return type
 * doesn't exist yet.  (Without this, there's no way to define the I/O procs
 * for a new type.)  But SQL function creation won't cope, so error out if
 * the target language is SQL.  (We do this here, not in the SQL-function
 * validator, so as not to produce a NOTICE and then an ERROR for the same
 * condition.)
 */
static void
compute_return_type(TypeName *returnType, Oid languageOid,
					Oid *prorettype_p, bool *returnsSet_p)
{
	Oid			rettype;
	Type		typtup;
	AclResult	aclresult;

	typtup = LookupTypeName(NULL, returnType, NULL, false);

	if (typtup)
	{
		if (!((Form_pg_type) GETSTRUCT(typtup))->typisdefined)
		{
			if (languageOid == SQLlanguageId)
				ereport(ERROR,
						(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
						 errmsg("SQL function cannot return shell type %s",
								TypeNameToString(returnType))));
			else
				ereport(NOTICE,
						(errcode(ERRCODE_WRONG_OBJECT_TYPE),
						 errmsg("return type %s is only a shell",
								TypeNameToString(returnType))));
		}
		rettype = typeTypeId(typtup);
		ReleaseSysCache(typtup);
	}
	else
	{
		char	   *typnam = TypeNameToString(returnType);
		Oid			namespaceId;
		AclResult	aclresult;
		char	   *typname;
		ObjectAddress address;

		/*
		 * Only C-coded functions can be I/O functions.  We enforce this
		 * restriction here mainly to prevent littering the catalogs with
		 * shell types due to simple typos in user-defined function
		 * definitions.
		 */
		if (languageOid != INTERNALlanguageId &&
			languageOid != ClanguageId)
			ereport(ERROR,
					(errcode(ERRCODE_UNDEFINED_OBJECT),
					 errmsg("type \"%s\" does not exist", typnam)));

		/* Reject if there's typmod decoration, too */
		if (returnType->typmods != NIL)
			ereport(ERROR,
					(errcode(ERRCODE_SYNTAX_ERROR),
					 errmsg("type modifier cannot be specified for shell type \"%s\"",
							typnam)));

		/* Otherwise, go ahead and make a shell type */
		ereport(NOTICE,
				(errcode(ERRCODE_UNDEFINED_OBJECT),
				 errmsg("type \"%s\" is not yet defined", typnam),
				 errdetail("Creating a shell type definition.")));
		namespaceId = QualifiedNameGetCreationNamespace(returnType->names,
														&typname);
		aclresult = pg_namespace_aclcheck(namespaceId, GetUserId(),
										  ACL_CREATE);
		if (aclresult != ACLCHECK_OK)
			aclcheck_error(aclresult, OBJECT_SCHEMA,
						   get_namespace_name(namespaceId));
		address = TypeShellMake(typname, namespaceId, GetUserId());
		rettype = address.objectId;
		Assert(OidIsValid(rettype));
	}

	aclresult = pg_type_aclcheck(rettype, GetUserId(), ACL_USAGE);
	if (aclresult != ACLCHECK_OK)
		aclcheck_error_type(aclresult, rettype);

	*prorettype_p = rettype;
	*returnsSet_p = returnType->setof;
}
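/*
 * Behavior sketch for the shell-type branch above (the "widget" type and
 * library name are hypothetical): referencing a not-yet-defined return type
 * from a C-language function creates a shell type with a NOTICE, while the
 * same statement in LANGUAGE SQL raises an ERROR instead:
 *
 *		CREATE FUNCTION widget_in(cstring) RETURNS widget
 *			AS 'widget' LANGUAGE C;
 *		-- NOTICE:  type "widget" is not yet defined
 *		-- DETAIL:  Creating a shell type definition.
 */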

/*
 * Interpret the function parameter list of a CREATE FUNCTION or
 * CREATE AGGREGATE statement.
 *
 * Input parameters:
 * parameters: list of FunctionParameter structs
 * languageOid: OID of function language (InvalidOid if it's CREATE AGGREGATE)
 * objtype: needed only to determine error handling
 *
 * Results are stored into output parameters.  parameterTypes must always
 * be created, but the other arrays are set to NULL if not needed.
 * variadicArgType is set to the variadic array type if there's a VARIADIC
 * parameter (there can be only one); or to InvalidOid if not.
 * requiredResultType is set to InvalidOid if there are no OUT parameters,
 * else it is set to the OID of the implied result type.
 */
void
interpret_function_parameter_list(ParseState *pstate,
								  List *parameters,
								  Oid languageOid,
								  ObjectType objtype,
								  oidvector **parameterTypes,
								  ArrayType **allParameterTypes,
								  ArrayType **parameterModes,
								  ArrayType **parameterNames,
								  List **parameterDefaults,
								  Oid *variadicArgType,
								  Oid *requiredResultType)
{
	int			parameterCount = list_length(parameters);
	Oid		   *inTypes;
	int			inCount = 0;
	Datum	   *allTypes;
	Datum	   *paramModes;
	Datum	   *paramNames;
	int			outCount = 0;
	int			varCount = 0;
	bool		have_names = false;
	bool		have_defaults = false;
	ListCell   *x;
	int			i;

	*variadicArgType = InvalidOid;	/* default result */
	*requiredResultType = InvalidOid;	/* default result */

	inTypes = (Oid *) palloc(parameterCount * sizeof(Oid));
	allTypes = (Datum *) palloc(parameterCount * sizeof(Datum));
	paramModes = (Datum *) palloc(parameterCount * sizeof(Datum));
	paramNames = (Datum *) palloc0(parameterCount * sizeof(Datum));
	*parameterDefaults = NIL;

	/* Scan the list and extract data into work arrays */
	i = 0;
	foreach(x, parameters)
	{
		FunctionParameter *fp = (FunctionParameter *) lfirst(x);
		TypeName   *t = fp->argType;
		bool		isinput = false;
		Oid			toid;
		Type		typtup;
		AclResult	aclresult;

		typtup = LookupTypeName(NULL, t, NULL, false);
		if (typtup)
		{
			if (!((Form_pg_type) GETSTRUCT(typtup))->typisdefined)
			{
				/* As above, hard error if language is SQL */
				if (languageOid == SQLlanguageId)
					ereport(ERROR,
							(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
							 errmsg("SQL function cannot accept shell type %s",
									TypeNameToString(t))));
				/* We don't allow creating aggregates on shell types either */
				else if (objtype == OBJECT_AGGREGATE)
we can allow users to make variadic aggregates as long as we're wary about
shipping any such in core.
In passing, this patch allows aggregate function arguments to be named, to
the extent of remembering the names in pg_proc and dumping them in pg_dump.
You can't yet call an aggregate using named-parameter notation. That seems
like a likely future extension, but it'll take some work, and it's not what
this patch is really about. Likewise, there's still some work needed to
make window functions handle VARIADIC fully, but I left that for another
day.
initdb forced because of new aggvariadic field in Aggref parse nodes.
2013-09-03 23:08:38 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
|
|
|
|
errmsg("aggregate cannot accept shell type %s",
|
|
|
|
TypeNameToString(t))));
|
2002-08-22 02:01:51 +02:00
|
|
|
else
|
2003-07-19 01:20:33 +02:00
|
|
|
ereport(NOTICE,
|
|
|
|
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
|
|
|
|
errmsg("argument type %s is only a shell",
|
|
|
|
TypeNameToString(t))));
|
2002-04-15 07:22:04 +02:00
|
|
|
}
|
2007-11-11 20:22:49 +01:00
|
|
|
toid = typeTypeId(typtup);
|
|
|
|
ReleaseSysCache(typtup);
|
2002-08-22 02:01:51 +02:00
|
|
|
}
		else
		{
			ereport(ERROR,
					(errcode(ERRCODE_UNDEFINED_OBJECT),
					 errmsg("type %s does not exist",
							TypeNameToString(t))));
			toid = InvalidOid;	/* keep compiler quiet */
		}

		aclresult = pg_type_aclcheck(toid, GetUserId(), ACL_USAGE);
		if (aclresult != ACLCHECK_OK)
			aclcheck_error_type(aclresult, toid);

		if (t->setof)
		{
			if (objtype == OBJECT_AGGREGATE)
				ereport(ERROR,
						(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
						 errmsg("aggregates cannot accept set arguments")));
			else if (objtype == OBJECT_PROCEDURE)
				ereport(ERROR,
						(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
						 errmsg("procedures cannot accept set arguments")));
			else
				ereport(ERROR,
						(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
						 errmsg("functions cannot accept set arguments")));
		}

		if (objtype == OBJECT_PROCEDURE)
		{
			if (fp->mode == FUNC_PARAM_OUT)
				ereport(ERROR,
						(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
						 (errmsg("procedures cannot have OUT arguments"),
						  errhint("INOUT arguments are permitted."))));
		}

		/* handle input parameters */
		if (fp->mode != FUNC_PARAM_OUT && fp->mode != FUNC_PARAM_TABLE)
		{
			/* other input parameters can't follow a VARIADIC parameter */
			if (varCount > 0)
				ereport(ERROR,
						(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
						 errmsg("VARIADIC parameter must be the last input parameter")));
			inTypes[inCount++] = toid;
			isinput = true;
		}

		/* handle output parameters */
		if (fp->mode != FUNC_PARAM_IN && fp->mode != FUNC_PARAM_VARIADIC)
		{
			if (objtype == OBJECT_PROCEDURE)
				*requiredResultType = RECORDOID;
			else if (outCount == 0) /* save first output param's type */
				*requiredResultType = toid;
			outCount++;
		}

		if (fp->mode == FUNC_PARAM_VARIADIC)
		{
			*variadicArgType = toid;
			varCount++;
			/* validate variadic parameter type */
			switch (toid)
			{
				case ANYARRAYOID:
				case ANYOID:
					/* okay */
					break;
				default:
					if (!OidIsValid(get_element_type(toid)))
						ereport(ERROR,
								(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
								 errmsg("VARIADIC parameter must be an array")));
					break;
			}
		}

		allTypes[i] = ObjectIdGetDatum(toid);

		paramModes[i] = CharGetDatum(fp->mode);

		if (fp->name && fp->name[0])
		{
			ListCell   *px;

			/*
			 * As of Postgres 9.0 we disallow using the same name for two
			 * input or two output function parameters.  Depending on the
			 * function's language, conflicting input and output names might
			 * be bad too, but we leave it to the PL to complain if so.
			 */
			foreach(px, parameters)
			{
				FunctionParameter *prevfp = (FunctionParameter *) lfirst(px);

				if (prevfp == fp)
					break;
				/* pure in doesn't conflict with pure out */
				if ((fp->mode == FUNC_PARAM_IN ||
					 fp->mode == FUNC_PARAM_VARIADIC) &&
					(prevfp->mode == FUNC_PARAM_OUT ||
					 prevfp->mode == FUNC_PARAM_TABLE))
					continue;
				if ((prevfp->mode == FUNC_PARAM_IN ||
					 prevfp->mode == FUNC_PARAM_VARIADIC) &&
					(fp->mode == FUNC_PARAM_OUT ||
					 fp->mode == FUNC_PARAM_TABLE))
					continue;
				if (prevfp->name && prevfp->name[0] &&
					strcmp(prevfp->name, fp->name) == 0)
					ereport(ERROR,
							(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
							 errmsg("parameter name \"%s\" used more than once",
									fp->name)));
			}

			paramNames[i] = CStringGetTextDatum(fp->name);
			have_names = true;
		}

		if (fp->defexpr)
		{
			Node	   *def;

			if (!isinput)
				ereport(ERROR,
						(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
						 errmsg("only input parameters can have default values")));

			def = transformExpr(pstate, fp->defexpr,
								EXPR_KIND_FUNCTION_DEFAULT);
			def = coerce_to_specific_type(pstate, def, toid, "DEFAULT");
			assign_expr_collations(pstate, def);

			/*
			 * Make sure no variables are referred to (this is probably dead
			 * code now that add_missing_from is history).
			 */
			if (list_length(pstate->p_rtable) != 0 ||
				contain_var_clause(def))
				ereport(ERROR,
						(errcode(ERRCODE_INVALID_COLUMN_REFERENCE),
						 errmsg("cannot use table references in parameter default value")));

			/*
			 * transformExpr() should have already rejected subqueries,
			 * aggregates, and window functions, based on the EXPR_KIND_ for a
			 * default expression.
			 *
			 * It can't return a set either --- but coerce_to_specific_type
			 * already checked that for us.
			 *
			 * Note: the point of these restrictions is to ensure that an
			 * expression that, on its face, hasn't got subplans, aggregates,
			 * etc cannot suddenly have them after function default arguments
			 * are inserted.
			 */

			*parameterDefaults = lappend(*parameterDefaults, def);
			have_defaults = true;
		}
		else
		{
			if (isinput && have_defaults)
				ereport(ERROR,
						(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
						 errmsg("input parameters after one with a default value must also have defaults")));
		}

		i++;
	}

	/* Now construct the proper outputs as needed */
	*parameterTypes = buildoidvector(inTypes, inCount);

	if (outCount > 0 || varCount > 0)
	{
		*allParameterTypes = construct_array(allTypes, parameterCount, OIDOID,
											 sizeof(Oid), true, 'i');
		*parameterModes = construct_array(paramModes, parameterCount, CHAROID,
										  1, true, 'c');
		if (outCount > 1)
			*requiredResultType = RECORDOID;
		/* otherwise we set requiredResultType correctly above */
	}
	else
	{
		*allParameterTypes = NULL;
		*parameterModes = NULL;
	}

	if (have_names)
	{
		for (i = 0; i < parameterCount; i++)
		{
			if (paramNames[i] == PointerGetDatum(NULL))
				paramNames[i] = CStringGetTextDatum("");
		}
		*parameterNames = construct_array(paramNames, parameterCount, TEXTOID,
										  -1, false, 'i');
	}
	else
		*parameterNames = NULL;
}


/*
 * Recognize one of the options that can be passed to both CREATE
 * FUNCTION and ALTER FUNCTION and return it via one of the out
 * parameters.  Returns true if the passed option was recognized.  If
 * the out parameter we were going to assign to points to non-NULL,
 * raise a duplicate-clause error.  (We don't try to detect duplicate
 * SET parameters though --- if you're redundant, the last one wins.)
 */
static bool
compute_common_attribute(ParseState *pstate,
						 bool is_procedure,
						 DefElem *defel,
						 DefElem **volatility_item,
						 DefElem **strict_item,
						 DefElem **security_item,
						 DefElem **leakproof_item,
						 List **set_items,
						 DefElem **cost_item,
						 DefElem **rows_item,
						 DefElem **parallel_item)
{
	if (strcmp(defel->defname, "volatility") == 0)
	{
		if (is_procedure)
			goto procedure_error;
		if (*volatility_item)
			goto duplicate_error;

		*volatility_item = defel;
	}
	else if (strcmp(defel->defname, "strict") == 0)
	{
		if (is_procedure)
			goto procedure_error;
		if (*strict_item)
			goto duplicate_error;

		*strict_item = defel;
	}
	else if (strcmp(defel->defname, "security") == 0)
	{
		if (*security_item)
			goto duplicate_error;

		*security_item = defel;
	}
	else if (strcmp(defel->defname, "leakproof") == 0)
	{
		if (is_procedure)
			goto procedure_error;
		if (*leakproof_item)
			goto duplicate_error;

		*leakproof_item = defel;
	}
	else if (strcmp(defel->defname, "set") == 0)
	{
		*set_items = lappend(*set_items, defel->arg);
	}
	else if (strcmp(defel->defname, "cost") == 0)
	{
		if (is_procedure)
			goto procedure_error;
		if (*cost_item)
			goto duplicate_error;

		*cost_item = defel;
	}
	else if (strcmp(defel->defname, "rows") == 0)
	{
		if (is_procedure)
			goto procedure_error;
		if (*rows_item)
			goto duplicate_error;

		*rows_item = defel;
	}
	else if (strcmp(defel->defname, "parallel") == 0)
	{
		if (is_procedure)
			goto procedure_error;
		if (*parallel_item)
			goto duplicate_error;

		*parallel_item = defel;
	}
	else
		return false;

	/* Recognized an option */
	return true;

duplicate_error:
	ereport(ERROR,
			(errcode(ERRCODE_SYNTAX_ERROR),
			 errmsg("conflicting or redundant options"),
			 parser_errposition(pstate, defel->location)));
	return false;				/* keep compiler quiet */

procedure_error:
	ereport(ERROR,
			(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
			 errmsg("invalid attribute in procedure definition"),
			 parser_errposition(pstate, defel->location)));
	return false;
}
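The option-slot pattern above (one out-parameter per recognized clause; an already-filled slot signals a duplicate) can be sketched in isolation. This is a simplified standalone illustration, not PostgreSQL code: the `Opt` struct and `recognize_option()` name are ours, standing in for `DefElem` and `compute_common_attribute()`, and only two of the option names are modeled.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for PostgreSQL's DefElem; illustrative only */
typedef struct Opt
{
	const char *name;
	const char *value;
} Opt;

/*
 * Recognize one option and stash it in the matching slot.  A slot that
 * is already non-NULL means the clause was given twice, which the real
 * code reports as "conflicting or redundant options".
 */
static bool
recognize_option(Opt *opt, Opt **volatility_item, Opt **cost_item,
				 bool *duplicate)
{
	Opt		  **slot;

	if (strcmp(opt->name, "volatility") == 0)
		slot = volatility_item;
	else if (strcmp(opt->name, "cost") == 0)
		slot = cost_item;
	else
		return false;			/* option not recognized here */

	if (*slot != NULL)
		*duplicate = true;		/* caller would ereport(ERROR) */
	else
		*slot = opt;
	return true;
}
```

The caller keeps all slots NULL-initialized and loops over the parsed option list, exactly as CreateFunction does with its `DefElem` list.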

static char
interpret_func_volatility(DefElem *defel)
{
	char	   *str = strVal(defel->arg);

	if (strcmp(str, "immutable") == 0)
		return PROVOLATILE_IMMUTABLE;
	else if (strcmp(str, "stable") == 0)
		return PROVOLATILE_STABLE;
	else if (strcmp(str, "volatile") == 0)
		return PROVOLATILE_VOLATILE;
	else
	{
		elog(ERROR, "invalid volatility \"%s\"", str);
		return 0;				/* keep compiler quiet */
	}
}
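interpret_func_volatility() reduces the parsed keyword to the one-letter code stored in `pg_proc.provolatile`. A minimal standalone sketch of that mapping follows; the PROVOLATILE_* letters match pg_proc.h, but the `volatility_code()` helper and its '\0'-for-unknown convention are ours (the real function raises an error instead).

```c
#include <string.h>

/* One-letter volatility codes, as stored in pg_proc.provolatile */
#define PROVOLATILE_IMMUTABLE	'i'
#define PROVOLATILE_STABLE		's'
#define PROVOLATILE_VOLATILE	'v'

/* Map a CREATE FUNCTION volatility keyword to its catalog code */
static char
volatility_code(const char *str)
{
	if (strcmp(str, "immutable") == 0)
		return PROVOLATILE_IMMUTABLE;
	if (strcmp(str, "stable") == 0)
		return PROVOLATILE_STABLE;
	if (strcmp(str, "volatile") == 0)
		return PROVOLATILE_VOLATILE;
	return '\0';				/* unknown; real code elog(ERROR)s */
}
```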

static char
interpret_func_parallel(DefElem *defel)
{
	char	   *str = strVal(defel->arg);

	if (strcmp(str, "safe") == 0)
		return PROPARALLEL_SAFE;
	else if (strcmp(str, "unsafe") == 0)
		return PROPARALLEL_UNSAFE;
	else if (strcmp(str, "restricted") == 0)
		return PROPARALLEL_RESTRICTED;
	else
	{
		ereport(ERROR,
				(errcode(ERRCODE_SYNTAX_ERROR),
				 errmsg("parameter \"parallel\" must be SAFE, RESTRICTED, or UNSAFE")));
		return PROPARALLEL_UNSAFE;	/* keep compiler quiet */
	}
}

/*
 * Update a proconfig value according to a list of VariableSetStmt items.
 *
 * The input and result may be NULL to signify a null entry.
 */
static ArrayType *
update_proconfig_value(ArrayType *a, List *set_items)
{
	ListCell   *l;

	foreach(l, set_items)
	{
		VariableSetStmt *sstmt = lfirst_node(VariableSetStmt, l);

		if (sstmt->kind == VAR_RESET_ALL)
			a = NULL;
		else
		{
			char	   *valuestr = ExtractSetVariableArgs(sstmt);

			if (valuestr)
				a = GUCArrayAdd(a, sstmt->name, valuestr);
			else				/* RESET */
				a = GUCArrayDelete(a, sstmt->name);
		}
	}

	return a;
}

/*
 * Dissect the list of options assembled in gram.y into function
 * attributes.
 */
static void
compute_function_attributes(ParseState *pstate,
							bool is_procedure,
							List *options,
							List **as,
							char **language,
							Node **transform,
							bool *windowfunc_p,
							char *volatility_p,
							bool *strict_p,
							bool *security_definer,
							bool *leakproof_p,
							ArrayType **proconfig,
							float4 *procost,
							float4 *prorows,
							char *parallel_p)
{
	ListCell   *option;
	DefElem    *as_item = NULL;
	DefElem    *language_item = NULL;
	DefElem    *transform_item = NULL;
	DefElem    *windowfunc_item = NULL;
	DefElem    *volatility_item = NULL;
	DefElem    *strict_item = NULL;
	DefElem    *security_item = NULL;
	DefElem    *leakproof_item = NULL;
	List	   *set_items = NIL;
	DefElem    *cost_item = NULL;
	DefElem    *rows_item = NULL;
	DefElem    *parallel_item = NULL;

	foreach(option, options)
	{
		DefElem    *defel = (DefElem *) lfirst(option);

		if (strcmp(defel->defname, "as") == 0)
		{
			if (as_item)
				ereport(ERROR,
						(errcode(ERRCODE_SYNTAX_ERROR),
						 errmsg("conflicting or redundant options"),
						 parser_errposition(pstate, defel->location)));
			as_item = defel;
		}
		else if (strcmp(defel->defname, "language") == 0)
		{
			if (language_item)
				ereport(ERROR,
						(errcode(ERRCODE_SYNTAX_ERROR),
						 errmsg("conflicting or redundant options"),
						 parser_errposition(pstate, defel->location)));
			language_item = defel;
		}
		else if (strcmp(defel->defname, "transform") == 0)
		{
			if (transform_item)
				ereport(ERROR,
						(errcode(ERRCODE_SYNTAX_ERROR),
						 errmsg("conflicting or redundant options"),
						 parser_errposition(pstate, defel->location)));
			transform_item = defel;
		}
		else if (strcmp(defel->defname, "window") == 0)
		{
			if (windowfunc_item)
				ereport(ERROR,
						(errcode(ERRCODE_SYNTAX_ERROR),
						 errmsg("conflicting or redundant options"),
						 parser_errposition(pstate, defel->location)));
			if (is_procedure)
				ereport(ERROR,
						(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
						 errmsg("invalid attribute in procedure definition"),
						 parser_errposition(pstate, defel->location)));
			windowfunc_item = defel;
		}
		else if (compute_common_attribute(pstate,
										  is_procedure,
										  defel,
										  &volatility_item,
										  &strict_item,
										  &security_item,
										  &leakproof_item,
										  &set_items,
										  &cost_item,
										  &rows_item,
										  &parallel_item))
		{
			/* recognized common option */
			continue;
		}
		else
			elog(ERROR, "option \"%s\" not recognized",
				 defel->defname);
	}

	/* process required items */
	if (as_item)
		*as = (List *) as_item->arg;
	else
	{
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
				 errmsg("no function body specified")));
		*as = NIL;				/* keep compiler quiet */
	}

	if (language_item)
		*language = strVal(language_item->arg);
	else
	{
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
				 errmsg("no language specified")));
		*language = NULL;		/* keep compiler quiet */
	}

	/* process optional items */
	if (transform_item)
		*transform = transform_item->arg;
	if (windowfunc_item)
		*windowfunc_p = intVal(windowfunc_item->arg);
	if (volatility_item)
		*volatility_p = interpret_func_volatility(volatility_item);
	if (strict_item)
		*strict_p = intVal(strict_item->arg);
	if (security_item)
		*security_definer = intVal(security_item->arg);
	if (leakproof_item)
		*leakproof_p = intVal(leakproof_item->arg);
	if (set_items)
		*proconfig = update_proconfig_value(NULL, set_items);
	if (cost_item)
	{
		*procost = defGetNumeric(cost_item);
		if (*procost <= 0)
			ereport(ERROR,
					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
					 errmsg("COST must be positive")));
	}
	if (rows_item)
	{
		*prorows = defGetNumeric(rows_item);
		if (*prorows <= 0)
			ereport(ERROR,
					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
					 errmsg("ROWS must be positive")));
	}
	if (parallel_item)
		*parallel_p = interpret_func_parallel(parallel_item);
}

/*
 * For a dynamically linked C language object, the form of the clause is
 *
 *	   AS <object file name> [, <link symbol name>]
 *
 * In all other cases
 *
 *	   AS <object reference, or sql code>
 */
static void
interpret_AS_clause(Oid languageOid, const char *languageName,
					char *funcname, List *as,
					char **prosrc_str_p, char **probin_str_p)
{
	Assert(as != NIL);

	if (languageOid == ClanguageId)
	{
		/*
		 * For "C" language, store the file name in probin and, when given,
		 * the link symbol name in prosrc.  If link symbol is omitted,
		 * substitute procedure name.  We also allow link symbol to be
		 * specified as "-", since that was the habit in PG versions before
		 * 8.4, and there might be dump files out there that don't translate
		 * that back to "omitted".
		 */
		*probin_str_p = strVal(linitial(as));
		if (list_length(as) == 1)
			*prosrc_str_p = funcname;
		else
		{
			*prosrc_str_p = strVal(lsecond(as));
			if (strcmp(*prosrc_str_p, "-") == 0)
				*prosrc_str_p = funcname;
		}
	}
	else
	{
		/* Everything else wants the given string in prosrc. */
		*prosrc_str_p = strVal(linitial(as));
		*probin_str_p = NULL;

		if (list_length(as) != 1)
			ereport(ERROR,
					(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
					 errmsg("only one AS item needed for language \"%s\"",
							languageName)));

		if (languageOid == INTERNALlanguageId)
		{
			/*
			 * In PostgreSQL versions before 6.5, the SQL name of the created
			 * function could not be different from the internal name, and
			 * "prosrc" wasn't used.  So there is code out there that does
			 * CREATE FUNCTION xyz AS '' LANGUAGE internal.  To preserve some
			 * modicum of backwards compatibility, accept an empty "prosrc"
			 * value as meaning the supplied SQL function name.
			 */
			if (strlen(*prosrc_str_p) == 0)
				*prosrc_str_p = funcname;
		}
	}
}

/*
 * CreateFunction
 *	 Execute a CREATE FUNCTION (or CREATE PROCEDURE) utility statement.
 */
ObjectAddress
CreateFunction(ParseState *pstate, CreateFunctionStmt *stmt)
{
	char	   *probin_str;
	char	   *prosrc_str;
	Oid			prorettype;
	bool		returnsSet;
	char	   *language;
	Oid			languageOid;
	Oid			languageValidator;
	Node	   *transformDefElem = NULL;
	char	   *funcname;
	Oid			namespaceId;
	AclResult	aclresult;
	oidvector  *parameterTypes;
	ArrayType  *allParameterTypes;
	ArrayType  *parameterModes;
	ArrayType  *parameterNames;
	List	   *parameterDefaults;
	Oid			variadicArgType;
	List	   *trftypes_list = NIL;
	ArrayType  *trftypes;
	Oid			requiredResultType;
	bool		isWindowFunc,
				isStrict,
				security,
				isLeakProof;
	char		volatility;
	ArrayType  *proconfig;
	float4		procost;
	float4		prorows;
	HeapTuple	languageTuple;
	Form_pg_language languageStruct;
	List	   *as_clause;
	char		parallel;

	/* Convert list of names to a name and namespace */
	namespaceId = QualifiedNameGetCreationNamespace(stmt->funcname,
													&funcname);

	/* Check we have creation rights in target namespace */
	aclresult = pg_namespace_aclcheck(namespaceId, GetUserId(), ACL_CREATE);
	if (aclresult != ACLCHECK_OK)
		aclcheck_error(aclresult, OBJECT_SCHEMA,
					   get_namespace_name(namespaceId));

	/* Set default attributes */
	isWindowFunc = false;
	isStrict = false;
	security = false;
	isLeakProof = false;
	volatility = PROVOLATILE_VOLATILE;
	proconfig = NULL;
	procost = -1;				/* indicates not set */
	prorows = -1;				/* indicates not set */
	parallel = PROPARALLEL_UNSAFE;

	/* Extract non-default attributes from stmt->options list */
	compute_function_attributes(pstate,
								stmt->is_procedure,
								stmt->options,
								&as_clause, &language, &transformDefElem,
								&isWindowFunc, &volatility,
								&isStrict, &security, &isLeakProof,
								&proconfig, &procost, &prorows, &parallel);

	/* Look up the language and validate permissions */
	languageTuple = SearchSysCache1(LANGNAME, PointerGetDatum(language));
	if (!HeapTupleIsValid(languageTuple))
		ereport(ERROR,
				(errcode(ERRCODE_UNDEFINED_OBJECT),
				 errmsg("language \"%s\" does not exist", language),
				 (PLTemplateExists(language) ?
				  errhint("Use CREATE EXTENSION to load the language into the database.") : 0)));

	languageOid = HeapTupleGetOid(languageTuple);
	languageStruct = (Form_pg_language) GETSTRUCT(languageTuple);

	if (languageStruct->lanpltrusted)
	{
		/* if trusted language, need USAGE privilege */
		AclResult	aclresult;

		aclresult = pg_language_aclcheck(languageOid, GetUserId(), ACL_USAGE);
		if (aclresult != ACLCHECK_OK)
			aclcheck_error(aclresult, OBJECT_LANGUAGE,
						   NameStr(languageStruct->lanname));
	}
	else
	{
		/* if untrusted language, must be superuser */
		if (!superuser())
			aclcheck_error(ACLCHECK_NO_PRIV, OBJECT_LANGUAGE,
						   NameStr(languageStruct->lanname));
	}

	languageValidator = languageStruct->lanvalidator;

	ReleaseSysCache(languageTuple);

	/*
	 * Only superuser is allowed to create leakproof functions because
	 * leakproof functions can see tuples which have not yet been filtered out
	 * by security barrier views or row level security policies.
	 */
	if (isLeakProof && !superuser())
		ereport(ERROR,
				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
				 errmsg("only superuser can define a leakproof function")));

	if (transformDefElem)
	{
		ListCell   *lc;

		foreach(lc, castNode(List, transformDefElem))
		{
			Oid			typeid = typenameTypeId(NULL,
												lfirst_node(TypeName, lc));
			Oid			elt = get_base_element_type(typeid);

			typeid = elt ? elt : typeid;

			get_transform_oid(typeid, languageOid, false);
			trftypes_list = lappend_oid(trftypes_list, typeid);
		}
	}

	/*
	 * Convert remaining parameters of CREATE to form wanted by
	 * ProcedureCreate.
	 */
	interpret_function_parameter_list(pstate,
									  stmt->parameters,
									  languageOid,
									  stmt->is_procedure ? OBJECT_PROCEDURE : OBJECT_FUNCTION,
									  &parameterTypes,
									  &allParameterTypes,
									  &parameterModes,
									  &parameterNames,
									  &parameterDefaults,
									  &variadicArgType,
									  &requiredResultType);

	if (stmt->is_procedure)
	{
		Assert(!stmt->returnType);
		prorettype = requiredResultType ? requiredResultType : VOIDOID;
		returnsSet = false;
	}
	else if (stmt->returnType)
	{
		/* explicit RETURNS clause */
		compute_return_type(stmt->returnType, languageOid,
							&prorettype, &returnsSet);
		if (OidIsValid(requiredResultType) && prorettype != requiredResultType)
			ereport(ERROR,
					(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
					 errmsg("function result type must be %s because of OUT parameters",
							format_type_be(requiredResultType))));
	}
	else if (OidIsValid(requiredResultType))
	{
		/* default RETURNS clause from OUT parameters */
		prorettype = requiredResultType;
		returnsSet = false;
	}
	else
	{
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
				 errmsg("function result type must be specified")));
		/* Alternative possibility: default to RETURNS VOID */
		prorettype = VOIDOID;
		returnsSet = false;
	}

	if (list_length(trftypes_list) > 0)
	{
		ListCell   *lc;
		Datum	   *arr;
		int			i;

		arr = palloc(list_length(trftypes_list) * sizeof(Datum));
		i = 0;
		foreach(lc, trftypes_list)
			arr[i++] = ObjectIdGetDatum(lfirst_oid(lc));
		trftypes = construct_array(arr, list_length(trftypes_list),
								   OIDOID, sizeof(Oid), true, 'i');
	}
	else
	{
		/* store SQL NULL instead of empty array */
		trftypes = NULL;
	}

	interpret_AS_clause(languageOid, language, funcname, as_clause,
						&prosrc_str, &probin_str);

	/*
	 * Set default values for COST and ROWS depending on other parameters;
	 * reject ROWS if it's not returnsSet.  NB: pg_dump knows these default
	 * values, keep it in sync if you change them.
	 */
	if (procost < 0)
	{
		/* SQL and PL-language functions are assumed more expensive */
		if (languageOid == INTERNALlanguageId ||
			languageOid == ClanguageId)
			procost = 1;
		else
			procost = 100;
	}
	if (prorows < 0)
	{
		if (returnsSet)
			prorows = 1000;
		else
			prorows = 0;		/* dummy value if not returnsSet */
	}
	else if (!returnsSet)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
				 errmsg("ROWS is not applicable when function does not return a set")));

	/*
	 * And now that we have all the parameters, and know we're permitted to do
	 * so, go ahead and create the function.
	 */
	return ProcedureCreate(funcname,
						   namespaceId,
						   stmt->replace,
						   returnsSet,
						   prorettype,
						   GetUserId(),
						   languageOid,
						   languageValidator,
						   prosrc_str,	/* converted to text later */
						   probin_str,	/* converted to text later */
						   stmt->is_procedure ? PROKIND_PROCEDURE : (isWindowFunc ? PROKIND_WINDOW : PROKIND_FUNCTION),
						   security,
						   isLeakProof,
						   isStrict,
						   volatility,
						   parallel,
						   parameterTypes,
						   PointerGetDatum(allParameterTypes),
						   PointerGetDatum(parameterModes),
						   PointerGetDatum(parameterNames),
						   parameterDefaults,
						   PointerGetDatum(trftypes),
						   PointerGetDatum(proconfig),
						   procost,
						   prorows);
}

/*
 * Guts of function deletion.
 *
 * Note: this is also used for aggregate deletion, since the OIDs of
 * both functions and aggregates point to pg_proc.
 */
void
RemoveFunctionById(Oid funcOid)
{
	Relation	relation;
	HeapTuple	tup;
	char		prokind;

	/*
	 * Delete the pg_proc tuple.
	 */
	relation = heap_open(ProcedureRelationId, RowExclusiveLock);

	tup = SearchSysCache1(PROCOID, ObjectIdGetDatum(funcOid));
	if (!HeapTupleIsValid(tup)) /* should not happen */
		elog(ERROR, "cache lookup failed for function %u", funcOid);

	prokind = ((Form_pg_proc) GETSTRUCT(tup))->prokind;

	CatalogTupleDelete(relation, &tup->t_self);

	ReleaseSysCache(tup);

	heap_close(relation, RowExclusiveLock);

	/*
	 * If there's a pg_aggregate tuple, delete that too.
	 */
	if (prokind == PROKIND_AGGREGATE)
	{
		relation = heap_open(AggregateRelationId, RowExclusiveLock);

		tup = SearchSysCache1(AGGFNOID, ObjectIdGetDatum(funcOid));
		if (!HeapTupleIsValid(tup)) /* should not happen */
			elog(ERROR, "cache lookup failed for pg_aggregate tuple for function %u", funcOid);

		CatalogTupleDelete(relation, &tup->t_self);

		ReleaseSysCache(tup);

		heap_close(relation, RowExclusiveLock);
	}
}

/*
 * Implements the ALTER FUNCTION utility command (except for the
 * RENAME and OWNER clauses, which are handled as part of the generic
 * ALTER framework).
 */
ObjectAddress
AlterFunction(ParseState *pstate, AlterFunctionStmt *stmt)
{
	HeapTuple	tup;
	Oid			funcOid;
	Form_pg_proc procForm;
	bool		is_procedure;
	Relation	rel;
	ListCell   *l;
	DefElem    *volatility_item = NULL;
	DefElem    *strict_item = NULL;
	DefElem    *security_def_item = NULL;
	DefElem    *leakproof_item = NULL;
	List	   *set_items = NIL;
	DefElem    *cost_item = NULL;
	DefElem    *rows_item = NULL;
	DefElem    *parallel_item = NULL;
	ObjectAddress address;

	rel = heap_open(ProcedureRelationId, RowExclusiveLock);

	funcOid = LookupFuncWithArgs(stmt->objtype, stmt->func, false);

	tup = SearchSysCacheCopy1(PROCOID, ObjectIdGetDatum(funcOid));
	if (!HeapTupleIsValid(tup)) /* should not happen */
		elog(ERROR, "cache lookup failed for function %u", funcOid);

	procForm = (Form_pg_proc) GETSTRUCT(tup);

	/* Permission check: must own function */
	if (!pg_proc_ownercheck(funcOid, GetUserId()))
		aclcheck_error(ACLCHECK_NOT_OWNER, stmt->objtype,
					   NameListToString(stmt->func->objname));

	if (procForm->prokind == PROKIND_AGGREGATE)
		ereport(ERROR,
				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
				 errmsg("\"%s\" is an aggregate function",
						NameListToString(stmt->func->objname))));

	is_procedure = (procForm->prokind == PROKIND_PROCEDURE);

	/* Examine requested actions. */
	foreach(l, stmt->actions)
	{
		DefElem    *defel = (DefElem *) lfirst(l);

		if (compute_common_attribute(pstate,
									 is_procedure,
									 defel,
									 &volatility_item,
									 &strict_item,
									 &security_def_item,
									 &leakproof_item,
									 &set_items,
									 &cost_item,
									 &rows_item,
									 &parallel_item) == false)
			elog(ERROR, "option \"%s\" not recognized", defel->defname);
	}

	if (volatility_item)
		procForm->provolatile = interpret_func_volatility(volatility_item);
	if (strict_item)
		procForm->proisstrict = intVal(strict_item->arg);
	if (security_def_item)
		procForm->prosecdef = intVal(security_def_item->arg);
	if (leakproof_item)
	{
		procForm->proleakproof = intVal(leakproof_item->arg);
		if (procForm->proleakproof && !superuser())
			ereport(ERROR,
					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
					 errmsg("only superuser can define a leakproof function")));
	}
	if (cost_item)
	{
		procForm->procost = defGetNumeric(cost_item);
		if (procForm->procost <= 0)
			ereport(ERROR,
					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
					 errmsg("COST must be positive")));
	}
	if (rows_item)
	{
		procForm->prorows = defGetNumeric(rows_item);
		if (procForm->prorows <= 0)
			ereport(ERROR,
					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
					 errmsg("ROWS must be positive")));
		if (!procForm->proretset)
			ereport(ERROR,
					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
					 errmsg("ROWS is not applicable when function does not return a set")));
	}
	if (set_items)
	{
		Datum		datum;
		bool		isnull;
		ArrayType  *a;
		Datum		repl_val[Natts_pg_proc];
		bool		repl_null[Natts_pg_proc];
		bool		repl_repl[Natts_pg_proc];

		/* extract existing proconfig setting */
		datum = SysCacheGetAttr(PROCOID, tup, Anum_pg_proc_proconfig, &isnull);
		a = isnull ? NULL : DatumGetArrayTypeP(datum);

		/* update according to each SET or RESET item, left to right */
		a = update_proconfig_value(a, set_items);

		/* update the tuple */
		memset(repl_repl, false, sizeof(repl_repl));
		repl_repl[Anum_pg_proc_proconfig - 1] = true;

		if (a == NULL)
		{
			repl_val[Anum_pg_proc_proconfig - 1] = (Datum) 0;
			repl_null[Anum_pg_proc_proconfig - 1] = true;
		}
		else
		{
			repl_val[Anum_pg_proc_proconfig - 1] = PointerGetDatum(a);
			repl_null[Anum_pg_proc_proconfig - 1] = false;
		}

		tup = heap_modify_tuple(tup, RelationGetDescr(rel),
								repl_val, repl_null, repl_repl);
	}
	if (parallel_item)
		procForm->proparallel = interpret_func_parallel(parallel_item);

	/* Do the update */
	CatalogTupleUpdate(rel, &tup->t_self, tup);

	InvokeObjectPostAlterHook(ProcedureRelationId, funcOid, 0);

	ObjectAddressSet(address, ProcedureRelationId, funcOid);

	heap_close(rel, NoLock);
	heap_freetuple(tup);

	return address;
}


/*
 * SetFunctionReturnType - change declared return type of a function
 *
 * This is presently only used for adjusting legacy functions that return
 * OPAQUE to return whatever we find their correct definition should be.
 * The caller should emit a suitable warning explaining what we did.
 */
void
SetFunctionReturnType(Oid funcOid, Oid newRetType)
{
	Relation	pg_proc_rel;
	HeapTuple	tup;
	Form_pg_proc procForm;
	ObjectAddress func_address;
	ObjectAddress type_address;

	pg_proc_rel = heap_open(ProcedureRelationId, RowExclusiveLock);

	tup = SearchSysCacheCopy1(PROCOID, ObjectIdGetDatum(funcOid));
	if (!HeapTupleIsValid(tup)) /* should not happen */
		elog(ERROR, "cache lookup failed for function %u", funcOid);
	procForm = (Form_pg_proc) GETSTRUCT(tup);

	if (procForm->prorettype != OPAQUEOID)	/* caller messed up */
		elog(ERROR, "function %u doesn't return OPAQUE", funcOid);

	/* okay to overwrite copied tuple */
	procForm->prorettype = newRetType;

	/* update the catalog and its indexes */
	CatalogTupleUpdate(pg_proc_rel, &tup->t_self, tup);

	heap_close(pg_proc_rel, RowExclusiveLock);

	/*
	 * Also update the dependency to the new type.  Opaque is a pinned type,
	 * so there is no old dependency record for it that we would need to
	 * remove.
	 */
	ObjectAddressSet(type_address, TypeRelationId, newRetType);
	ObjectAddressSet(func_address, ProcedureRelationId, funcOid);
	recordDependencyOn(&func_address, &type_address, DEPENDENCY_NORMAL);
}

/*
 * SetFunctionArgType - change declared argument type of a function
 *
 * As above, but change an argument's type.
 */
void
SetFunctionArgType(Oid funcOid, int argIndex, Oid newArgType)
{
	Relation	pg_proc_rel;
	HeapTuple	tup;
	Form_pg_proc procForm;
	ObjectAddress func_address;
	ObjectAddress type_address;

	pg_proc_rel = heap_open(ProcedureRelationId, RowExclusiveLock);

	tup = SearchSysCacheCopy1(PROCOID, ObjectIdGetDatum(funcOid));
	if (!HeapTupleIsValid(tup)) /* should not happen */
		elog(ERROR, "cache lookup failed for function %u", funcOid);
	procForm = (Form_pg_proc) GETSTRUCT(tup);

	if (argIndex < 0 || argIndex >= procForm->pronargs ||
		procForm->proargtypes.values[argIndex] != OPAQUEOID)
		elog(ERROR, "function %u doesn't take OPAQUE", funcOid);

	/* okay to overwrite copied tuple */
	procForm->proargtypes.values[argIndex] = newArgType;

	/* update the catalog and its indexes */
	CatalogTupleUpdate(pg_proc_rel, &tup->t_self, tup);

	heap_close(pg_proc_rel, RowExclusiveLock);

	/*
	 * Also update the dependency to the new type.  Opaque is a pinned type,
	 * so there is no old dependency record for it that we would need to
	 * remove.
	 */
	ObjectAddressSet(type_address, TypeRelationId, newArgType);
	ObjectAddressSet(func_address, ProcedureRelationId, funcOid);
	recordDependencyOn(&func_address, &type_address, DEPENDENCY_NORMAL);
}
|
|
|
|
|
|
|
|
|
2002-07-19 01:11:32 +02:00
|
|
|
|
|
|
|
/*
|
|
|
|
* CREATE CAST
|
|
|
|
*/
|
Change many routines to return ObjectAddress rather than OID
The changed routines are mostly those that can be directly called by
ProcessUtilitySlow; the intention is to make the affected object
information more precise, in support for future event trigger changes.
Originally it was envisioned that the OID of the affected object would
be enough, and in most cases that is correct, but upon actually
implementing the event trigger changes it turned out that ObjectAddress
is more widely useful.
Additionally, some command execution routines grew an output argument
that's an object address which provides further info about the executed
command. To wit:
* for ALTER DOMAIN / ADD CONSTRAINT, it corresponds to the address of
the new constraint
* for ALTER OBJECT / SET SCHEMA, it corresponds to the address of the
schema that originally contained the object.
* for ALTER EXTENSION {ADD, DROP} OBJECT, it corresponds to the address
of the object added to or dropped from the extension.
There's no user-visible change in this commit, and no functional change
either.
Discussion: 20150218213255.GC6717@tamriel.snowman.net
Reviewed-By: Stephen Frost, Andres Freund
2015-03-03 18:10:50 +01:00
|
|
|
ObjectAddress
|
2002-07-19 01:11:32 +02:00
|
|
|
CreateCast(CreateCastStmt *stmt)
|
|
|
|
{
|
|
|
|
Oid sourcetypeid;
|
|
|
|
Oid targettypeid;
|
2009-03-04 12:53:53 +01:00
|
|
|
char sourcetyptype;
|
|
|
|
char targettyptype;
|
2002-07-19 01:11:32 +02:00
|
|
|
Oid funcid;
|
2011-07-23 22:59:39 +02:00
|
|
|
Oid castid;
|
2004-06-16 03:27:00 +02:00
|
|
|
int nargs;
|
Extend pg_cast castimplicit column to a three-way value; this allows us
to be flexible about assignment casts without introducing ambiguity in
operator/function resolution. Introduce a well-defined promotion hierarchy
for numeric datatypes (int2->int4->int8->numeric->float4->float8).
Change make_const to initially label numeric literals as int4, int8, or
numeric (never float8 anymore).
Explicitly mark Func and RelabelType nodes to indicate whether they came
from a function call, explicit cast, or implicit cast; use this to do
reverse-listing more accurately and without so many heuristics.
Explicit casts to char, varchar, bit, varbit will truncate or pad without
raising an error (the pre-7.2 behavior), while assigning to a column without
any explicit cast will still raise an error for wrong-length data like 7.3.
This more nearly follows the SQL spec than 7.2 behavior (we should be
reporting a 'completion condition' in the explicit-cast cases, but we have
no mechanism for that, so just do silent truncation).
Fix some problems with enforcement of typmod for array elements;
it didn't work at all in 'UPDATE ... SET array[n] = foo', for example.
Provide a generalized array_length_coerce() function to replace the
specialized per-array-type functions that used to be needed (and were
missing for NUMERIC as well as all the datetime types).
Add missing conversions int8<->float4, text<->numeric, oid<->int8.
initdb forced.
2002-09-18 23:35:25 +02:00
|
|
|
char castcontext;
|
2008-10-31 09:39:22 +01:00
|
|
|
char castmethod;
|
2002-07-19 01:11:32 +02:00
|
|
|
Relation relation;
|
Extend pg_cast castimplicit column to a three-way value; this allows us
to be flexible about assignment casts without introducing ambiguity in
operator/function resolution. Introduce a well-defined promotion hierarchy
for numeric datatypes (int2->int4->int8->numeric->float4->float8).
Change make_const to initially label numeric literals as int4, int8, or
numeric (never float8 anymore).
Explicitly mark Func and RelabelType nodes to indicate whether they came
from a function call, explicit cast, or implicit cast; use this to do
reverse-listing more accurately and without so many heuristics.
Explicit casts to char, varchar, bit, varbit will truncate or pad without
raising an error (the pre-7.2 behavior), while assigning to a column without
any explicit cast will still raise an error for wrong-length data like 7.3.
This more nearly follows the SQL spec than 7.2 behavior (we should be
reporting a 'completion condition' in the explicit-cast cases, but we have
no mechanism for that, so just do silent truncation).
Fix some problems with enforcement of typmod for array elements;
it didn't work at all in 'UPDATE ... SET array[n] = foo', for example.
Provide a generalized array_length_coerce() function to replace the
specialized per-array-type functions that used to be needed (and were
missing for NUMERIC as well as all the datetime types).
Add missing conversions int8<->float4, text<->numeric, oid<->int8.
initdb forced.
2002-09-18 23:35:25 +02:00
|
|
|
HeapTuple tuple;
|
|
|
|
Datum values[Natts_pg_cast];
|
2008-11-02 02:45:28 +01:00
|
|
|
bool nulls[Natts_pg_cast];
|
2002-07-19 01:11:32 +02:00
|
|
|
ObjectAddress myself,
|
2002-09-04 22:31:48 +02:00
|
|
|
referenced;
|
2011-12-19 23:05:19 +01:00
|
|
|
AclResult aclresult;
|
2002-07-19 01:11:32 +02:00
|
|
|
|
2010-10-25 20:40:46 +02:00
|
|
|
sourcetypeid = typenameTypeId(NULL, stmt->sourcetype);
|
|
|
|
targettypeid = typenameTypeId(NULL, stmt->targettype);
|
2009-03-04 12:53:53 +01:00
|
|
|
sourcetyptype = get_typtype(sourcetypeid);
|
|
|
|
targettyptype = get_typtype(targettypeid);
|
2002-08-22 02:01:51 +02:00
|
|
|
|
2006-03-14 23:48:25 +01:00
|
|
|
/* No pseudo-types allowed */
|
2009-03-04 12:53:53 +01:00
|
|
|
if (sourcetyptype == TYPTYPE_PSEUDO)
|
2003-07-19 01:20:33 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
|
|
|
|
errmsg("source data type %s is a pseudo-type",
|
|
|
|
TypeNameToString(stmt->sourcetype))));
|
2002-08-22 02:01:51 +02:00
|
|
|
|
2009-03-04 12:53:53 +01:00
|
|
|
if (targettyptype == TYPTYPE_PSEUDO)
|
2003-07-19 01:20:33 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
|
|
|
|
errmsg("target data type %s is a pseudo-type",
|
|
|
|
TypeNameToString(stmt->targettype))));
|
2002-08-22 02:01:51 +02:00
|
|
|
|
2003-07-19 01:20:33 +02:00
|
|
|
/* Permission check */
|
2002-08-11 19:44:12 +02:00
|
|
|
if (!pg_type_ownercheck(sourcetypeid, GetUserId())
|
|
|
|
&& !pg_type_ownercheck(targettypeid, GetUserId()))
|
2003-07-19 01:20:33 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
|
|
|
|
errmsg("must be owner of type %s or type %s",
|
2008-10-21 12:38:51 +02:00
|
|
|
format_type_be(sourcetypeid),
|
|
|
|
format_type_be(targettypeid))));
|
2002-08-11 19:44:12 +02:00
|
|
|
|
2011-12-19 23:05:19 +01:00
|
|
|
aclresult = pg_type_aclcheck(sourcetypeid, GetUserId(), ACL_USAGE);
|
|
|
|
if (aclresult != ACLCHECK_OK)
|
2012-06-15 21:55:03 +02:00
|
|
|
aclcheck_error_type(aclresult, sourcetypeid);
|
2011-12-19 23:05:19 +01:00
|
|
|
|
|
|
|
aclresult = pg_type_aclcheck(targettypeid, GetUserId(), ACL_USAGE);
|
|
|
|
if (aclresult != ACLCHECK_OK)
|
2012-06-15 21:55:03 +02:00
|
|
|
aclcheck_error_type(aclresult, targettypeid);
|
2011-12-19 23:05:19 +01:00
|
|
|
|
2012-04-24 15:20:53 +02:00
|
|
|
/* Domains are allowed for historical reasons, but we warn */
|
|
|
|
if (sourcetyptype == TYPTYPE_DOMAIN)
|
|
|
|
ereport(WARNING,
|
|
|
|
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
|
|
|
|
errmsg("cast will be ignored because the source data type is a domain")));
|
|
|
|
|
|
|
|
else if (targettyptype == TYPTYPE_DOMAIN)
|
|
|
|
ereport(WARNING,
|
|
|
|
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
|
|
|
|
errmsg("cast will be ignored because the target data type is a domain")));
|
|
|
|
|
2017-02-06 10:33:58 +01:00
|
|
|
/* Determine the cast method */
|
2002-07-19 01:11:32 +02:00
|
|
|
if (stmt->func != NULL)
|
2008-10-31 09:39:22 +01:00
|
|
|
castmethod = COERCION_METHOD_FUNCTION;
|
2009-06-11 16:49:15 +02:00
|
|
|
else if (stmt->inout)
|
2008-10-31 09:39:22 +01:00
|
|
|
castmethod = COERCION_METHOD_INOUT;
|
|
|
|
else
|
|
|
|
castmethod = COERCION_METHOD_BINARY;
|
|
|
|
|
|
|
|
if (castmethod == COERCION_METHOD_FUNCTION)
|
2002-07-19 01:11:32 +02:00
|
|
|
{
|
Extend pg_cast castimplicit column to a three-way value; this allows us
to be flexible about assignment casts without introducing ambiguity in
operator/function resolution. Introduce a well-defined promotion hierarchy
for numeric datatypes (int2->int4->int8->numeric->float4->float8).
Change make_const to initially label numeric literals as int4, int8, or
numeric (never float8 anymore).
Explicitly mark Func and RelabelType nodes to indicate whether they came
from a function call, explicit cast, or implicit cast; use this to do
reverse-listing more accurately and without so many heuristics.
Explicit casts to char, varchar, bit, varbit will truncate or pad without
raising an error (the pre-7.2 behavior), while assigning to a column without
any explicit cast will still raise an error for wrong-length data like 7.3.
This more nearly follows the SQL spec than 7.2 behavior (we should be
reporting a 'completion condition' in the explicit-cast cases, but we have
no mechanism for that, so just do silent truncation).
Fix some problems with enforcement of typmod for array elements;
it didn't work at all in 'UPDATE ... SET array[n] = foo', for example.
Provide a generalized array_length_coerce() function to replace the
specialized per-array-type functions that used to be needed (and were
missing for NUMERIC as well as all the datetime types).
Add missing conversions int8<->float4, text<->numeric, oid<->int8.
initdb forced.
2002-09-18 23:35:25 +02:00
|
|
|
Form_pg_proc procstruct;
|
|
|
|
|
2017-11-30 14:46:13 +01:00
|
|
|
funcid = LookupFuncWithArgs(OBJECT_FUNCTION, stmt->func, false);
|
2002-07-19 01:11:32 +02:00
|
|
|
|
2010-02-14 19:42:19 +01:00
|
|
|
tuple = SearchSysCache1(PROCOID, ObjectIdGetDatum(funcid));
|
2002-07-19 01:11:32 +02:00
|
|
|
if (!HeapTupleIsValid(tuple))
|
2003-07-19 01:20:33 +02:00
|
|
|
elog(ERROR, "cache lookup failed for function %u", funcid);
|
2002-07-19 01:11:32 +02:00
|
|
|
|
|
|
|
procstruct = (Form_pg_proc) GETSTRUCT(tuple);
|
2004-06-16 03:27:00 +02:00
|
|
|
nargs = procstruct->pronargs;
|
|
|
|
if (nargs < 1 || nargs > 3)
|
2003-07-19 01:20:33 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
Phase 3 of pgindent updates.
Don't move parenthesized lines to the left, even if that means they
flow past the right margin.
By default, BSD indent lines up statement continuation lines that are
within parentheses so that they start just to the right of the preceding
left parenthesis. However, traditionally, if that resulted in the
continuation line extending to the right of the desired right margin,
then indent would push it left just far enough to not overrun the margin,
if it could do so without making the continuation line start to the left of
the current statement indent. That makes for a weird mix of indentations
unless one has been completely rigid about never violating the 80-column
limit.
This behavior has been pretty universally panned by Postgres developers.
Hence, disable it with indent's new -lpl switch, so that parenthesized
lines are always lined up with the preceding left paren.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 21:35:54 +02:00
|
|
|
errmsg("cast function must take one to three arguments")));
|
2008-07-11 09:02:43 +02:00
|
|
|
if (!IsBinaryCoercible(sourcetypeid, procstruct->proargtypes.values[0]))
|
2003-07-19 01:20:33 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
2009-06-11 16:49:15 +02:00
|
|
|
errmsg("argument of cast function must match or be binary-coercible from source data type")));
|
2005-03-29 02:17:27 +02:00
|
|
|
if (nargs > 1 && procstruct->proargtypes.values[1] != INT4OID)
|
2004-06-16 03:27:00 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
Phase 3 of pgindent updates.
Don't move parenthesized lines to the left, even if that means they
flow past the right margin.
By default, BSD indent lines up statement continuation lines that are
within parentheses so that they start just to the right of the preceding
left parenthesis. However, traditionally, if that resulted in the
continuation line extending to the right of the desired right margin,
then indent would push it left just far enough to not overrun the margin,
if it could do so without making the continuation line start to the left of
the current statement indent. That makes for a weird mix of indentations
unless one has been completely rigid about never violating the 80-column
limit.
This behavior has been pretty universally panned by Postgres developers.
Hence, disable it with indent's new -lpl switch, so that parenthesized
lines are always lined up with the preceding left paren.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 21:35:54 +02:00
|
|
|
errmsg("second argument of cast function must be type %s",
|
|
|
|
"integer")));
|
2005-03-29 02:17:27 +02:00
|
|
|
if (nargs > 2 && procstruct->proargtypes.values[2] != BOOLOID)
|
2004-06-16 03:27:00 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
2017-01-18 20:08:20 +01:00
|
|
|
errmsg("third argument of cast function must be type %s",
|
|
|
|
"boolean")));
|
2008-07-11 09:02:43 +02:00
|
|
|
if (!IsBinaryCoercible(procstruct->prorettype, targettypeid))
|
2003-07-19 01:20:33 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
2008-07-12 12:44:56 +02:00
|
|
|
errmsg("return data type of cast function must match or be binary-coercible to target data type")));
|
2003-08-04 02:43:34 +02:00
|
|
|
|
2003-02-01 23:09:26 +01:00
|
|
|
/*
|
2005-10-15 04:49:52 +02:00
|
|
|
* Restricting the volatility of a cast function may or may not be a
|
|
|
|
* good idea in the abstract, but it definitely breaks many old
|
2014-05-06 18:12:18 +02:00
|
|
|
* user-defined types. Disable this check --- tgl 2/1/03
|
2003-02-01 23:09:26 +01:00
|
|
|
*/
|
|
|
|
#ifdef NOT_USED
|
2002-09-15 15:04:16 +02:00
|
|
|
if (procstruct->provolatile == PROVOLATILE_VOLATILE)
|
2003-07-19 01:20:33 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
|
|
|
errmsg("cast function must not be volatile")));
|
2003-02-01 23:09:26 +01:00
|
|
|
#endif
|
2018-03-02 14:57:38 +01:00
|
|
|
if (procstruct->prokind != PROKIND_FUNCTION)
|
2003-07-19 01:20:33 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
2018-03-02 14:57:38 +01:00
|
|
|
errmsg("cast function must be a normal function")));
|
2002-07-19 01:11:32 +02:00
|
|
|
if (procstruct->proretset)
|
2003-07-19 01:20:33 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
|
|
|
errmsg("cast function must not return a set")));
|
2002-07-19 01:11:32 +02:00
|
|
|
|
|
|
|
ReleaseSysCache(tuple);
|
|
|
|
}
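	/*
	 * Per the checks above (illustrative comment, not from the original
	 * source), for CREATE CAST (src AS tgt) the cast function must have one
	 * of these signatures:
	 *
	 *   f(src) RETURNS tgt
	 *   f(src, integer) RETURNS tgt           -- second arg receives the typmod
	 *   f(src, integer, boolean) RETURNS tgt  -- third arg: explicit-cast flag
	 */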
	else
	{
		funcid = InvalidOid;
		nargs = 0;
	}

	if (castmethod == COERCION_METHOD_BINARY)
	{
		int16		typ1len;
		int16		typ2len;
		bool		typ1byval;
		bool		typ2byval;
		char		typ1align;
		char		typ2align;

		/*
		 * Must be superuser to create binary-compatible casts, since
		 * erroneous casts can easily crash the backend.
		 */
		if (!superuser())
			ereport(ERROR,
					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
					 errmsg("must be superuser to create a cast WITHOUT FUNCTION")));

		/*
		 * Also, insist that the types match as to size, alignment, and
		 * pass-by-value attributes; this provides at least a crude check that
		 * they have similar representations.  A pair of types that fail this
		 * test should certainly not be equated.
		 */
		get_typlenbyvalalign(sourcetypeid, &typ1len, &typ1byval, &typ1align);
		get_typlenbyvalalign(targettypeid, &typ2len, &typ2byval, &typ2align);
		if (typ1len != typ2len ||
			typ1byval != typ2byval ||
			typ1align != typ2align)
			ereport(ERROR,
					(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
					 errmsg("source and target data types are not physically compatible")));

		/*
		 * We know that composite, enum and array types are never binary-
		 * compatible with each other.  They all have OIDs embedded in them.
		 *
		 * Theoretically you could build a user-defined base type that is
		 * binary-compatible with a composite, enum, or array type.  But we
		 * disallow that too, as in practice such a cast is surely a mistake.
		 * You can always work around that by writing a cast function.
		 */
		if (sourcetyptype == TYPTYPE_COMPOSITE ||
			targettyptype == TYPTYPE_COMPOSITE)
			ereport(ERROR,
					(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
					 errmsg("composite data types are not binary-compatible")));

		if (sourcetyptype == TYPTYPE_ENUM ||
			targettyptype == TYPTYPE_ENUM)
			ereport(ERROR,
					(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
					 errmsg("enum data types are not binary-compatible")));

		if (OidIsValid(get_element_type(sourcetypeid)) ||
			OidIsValid(get_element_type(targettypeid)))
			ereport(ERROR,
					(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
					 errmsg("array data types are not binary-compatible")));

		/*
		 * We also disallow creating binary-compatibility casts involving
		 * domains.  Casting from a domain to its base type is already
		 * allowed, and casting the other way ought to go through domain
		 * coercion to permit constraint checking.  Again, if you're intent on
		 * having your own semantics for that, create a no-op cast function.
		 *
		 * NOTE: if we were to relax this, the above checks for composites
		 * etc. would have to be modified to look through domains to their
		 * base types.
		 */
		if (sourcetyptype == TYPTYPE_DOMAIN ||
			targettyptype == TYPTYPE_DOMAIN)
			ereport(ERROR,
					(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
					 errmsg("domain data types must not be marked binary-compatible")));
	}
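	/*
	 * Example of the domain restriction above (illustrative comment, not
	 * from the original source): given CREATE DOMAIN d AS int, the command
	 *
	 *   CREATE CAST (d AS int) WITHOUT FUNCTION
	 *
	 * fails with "domain data types must not be marked binary-compatible";
	 * a no-op cast function must be supplied instead.
	 */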
	/*
	 * Allow source and target types to be same only for length coercion
	 * functions.  We assume a multi-arg function does length coercion.
	 */
	if (sourcetypeid == targettypeid && nargs < 2)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
				 errmsg("source data type and target data type are the same")));
	/* convert CoercionContext enum to char value for castcontext */
	switch (stmt->context)
	{
		case COERCION_IMPLICIT:
			castcontext = COERCION_CODE_IMPLICIT;
			break;
		case COERCION_ASSIGNMENT:
			castcontext = COERCION_CODE_ASSIGNMENT;
			break;
		case COERCION_EXPLICIT:
			castcontext = COERCION_CODE_EXPLICIT;
			break;
		default:
			elog(ERROR, "unrecognized CoercionContext: %d", stmt->context);
			castcontext = 0;	/* keep compiler quiet */
			break;
	}
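	/*
	 * For reference (expository comment, not from the original source):
	 * these codes end up in pg_cast.castcontext and correspond to the
	 * CREATE CAST options AS IMPLICIT ('i'), AS ASSIGNMENT ('a'), and the
	 * default explicit-only behavior ('e').
	 */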
	relation = heap_open(CastRelationId, RowExclusiveLock);

	/*
	 * Check for duplicate.  This is just to give a friendly error message,
	 * the unique index would catch it anyway (so no need to sweat about race
	 * conditions).
	 */
	tuple = SearchSysCache2(CASTSOURCETARGET,
							ObjectIdGetDatum(sourcetypeid),
							ObjectIdGetDatum(targettypeid));
	if (HeapTupleIsValid(tuple))
		ereport(ERROR,
				(errcode(ERRCODE_DUPLICATE_OBJECT),
				 errmsg("cast from type %s to type %s already exists",
						format_type_be(sourcetypeid),
						format_type_be(targettypeid))));
	/* ready to go */
	values[Anum_pg_cast_castsource - 1] = ObjectIdGetDatum(sourcetypeid);
	values[Anum_pg_cast_casttarget - 1] = ObjectIdGetDatum(targettypeid);
	values[Anum_pg_cast_castfunc - 1] = ObjectIdGetDatum(funcid);
	values[Anum_pg_cast_castcontext - 1] = CharGetDatum(castcontext);
	values[Anum_pg_cast_castmethod - 1] = CharGetDatum(castmethod);

	MemSet(nulls, false, sizeof(nulls));

	tuple = heap_form_tuple(RelationGetDescr(relation), values, nulls);

	castid = CatalogTupleInsert(relation, tuple);
	/* make dependency entries */
	myself.classId = CastRelationId;
	myself.objectId = castid;
	myself.objectSubId = 0;

	/* dependency on source type */
	referenced.classId = TypeRelationId;
	referenced.objectId = sourcetypeid;
	referenced.objectSubId = 0;
	recordDependencyOn(&myself, &referenced, DEPENDENCY_NORMAL);

	/* dependency on target type */
	referenced.classId = TypeRelationId;
	referenced.objectId = targettypeid;
	referenced.objectSubId = 0;
	recordDependencyOn(&myself, &referenced, DEPENDENCY_NORMAL);

	/* dependency on function */
	if (OidIsValid(funcid))
	{
		referenced.classId = ProcedureRelationId;
		referenced.objectId = funcid;
		referenced.objectSubId = 0;
		recordDependencyOn(&myself, &referenced, DEPENDENCY_NORMAL);
	}

	/* dependency on extension */
	recordDependencyOnCurrentExtension(&myself, false);

	/* Post creation hook for new cast */
	InvokeObjectPostCreateHook(CastRelationId, castid, 0);

	heap_freetuple(tuple);
|
Extend pg_cast castimplicit column to a three-way value; this allows us
to be flexible about assignment casts without introducing ambiguity in
operator/function resolution. Introduce a well-defined promotion hierarchy
for numeric datatypes (int2->int4->int8->numeric->float4->float8).
Change make_const to initially label numeric literals as int4, int8, or
numeric (never float8 anymore).
Explicitly mark Func and RelabelType nodes to indicate whether they came
from a function call, explicit cast, or implicit cast; use this to do
reverse-listing more accurately and without so many heuristics.
Explicit casts to char, varchar, bit, varbit will truncate or pad without
raising an error (the pre-7.2 behavior), while assigning to a column without
any explicit cast will still raise an error for wrong-length data like 7.3.
This more nearly follows the SQL spec than 7.2 behavior (we should be
reporting a 'completion condition' in the explicit-cast cases, but we have
no mechanism for that, so just do silent truncation).
Fix some problems with enforcement of typmod for array elements;
it didn't work at all in 'UPDATE ... SET array[n] = foo', for example.
Provide a generalized array_length_coerce() function to replace the
specialized per-array-type functions that used to be needed (and were
missing for NUMERIC as well as all the datetime types).
Add missing conversions int8<->float4, text<->numeric, oid<->int8.
initdb forced.
2002-09-18 23:35:25 +02:00
|
|
|
|
2002-07-19 01:11:32 +02:00
|
|
|
heap_close(relation, RowExclusiveLock);
|
2012-12-29 13:55:37 +01:00
|
|
|
|
Change many routines to return ObjectAddress rather than OID
The changed routines are mostly those that can be directly called by
ProcessUtilitySlow; the intention is to make the affected object
information more precise, in support for future event trigger changes.
Originally it was envisioned that the OID of the affected object would
be enough, and in most cases that is correct, but upon actually
implementing the event trigger changes it turned out that ObjectAddress
is more widely useful.
Additionally, some command execution routines grew an output argument
that's an object address which provides further info about the executed
command. To wit:
* for ALTER DOMAIN / ADD CONSTRAINT, it corresponds to the address of
the new constraint
* for ALTER OBJECT / SET SCHEMA, it corresponds to the address of the
schema that originally contained the object.
* for ALTER EXTENSION {ADD, DROP} OBJECT, it corresponds to the address
of the object added to or dropped from the extension.
There's no user-visible change in this commit, and no functional change
either.
Discussion: 20150218213255.GC6717@tamriel.snowman.net
Reviewed-By: Stephen Frost, Andres Freund
2015-03-03 18:10:50 +01:00
|
|
|
return myself;
|
2002-07-19 01:11:32 +02:00
|
|
|
}
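Every dependency entry above is built from the same (classId, objectId, objectSubId) triple. A minimal standalone sketch of that addressing scheme, with simplified stand-in types rather than the backend's actual definitions:

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t Oid;           /* simplified stand-in for the backend's Oid */

/* Simplified stand-in for the backend's ObjectAddress struct */
typedef struct ObjectAddress
{
    Oid     classId;            /* OID of the catalog the object lives in */
    Oid     objectId;           /* OID of the object itself */
    int32_t objectSubId;        /* column number, or 0 for the whole object */
} ObjectAddress;

/* Build an address for a whole (non-column) object, as CreateCast does */
static ObjectAddress
make_whole_object_address(Oid classId, Oid objectId)
{
    ObjectAddress addr;

    addr.classId = classId;
    addr.objectId = objectId;
    addr.objectSubId = 0;
    return addr;
}
```

In the real code, `recordDependencyOn(&myself, &referenced, ...)` then links two such addresses in pg_depend.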

/*
 * get_cast_oid - given two type OIDs, look up a cast OID
 *
 * If missing_ok is false, throw an error if the cast is not found.  If
 * true, just return InvalidOid.
 */
Oid
get_cast_oid(Oid sourcetypeid, Oid targettypeid, bool missing_ok)
{
    Oid         oid;

    oid = GetSysCacheOid2(CASTSOURCETARGET,
                          ObjectIdGetDatum(sourcetypeid),
                          ObjectIdGetDatum(targettypeid));
    if (!OidIsValid(oid) && !missing_ok)
        ereport(ERROR,
                (errcode(ERRCODE_UNDEFINED_OBJECT),
                 errmsg("cast from type %s to type %s does not exist",
                        format_type_be(sourcetypeid),
                        format_type_be(targettypeid))));
    return oid;
}

void
DropCastById(Oid castOid)
{
    Relation    relation;
    ScanKeyData scankey;
    SysScanDesc scan;
    HeapTuple   tuple;

    relation = heap_open(CastRelationId, RowExclusiveLock);

    ScanKeyInit(&scankey,
                ObjectIdAttributeNumber,
                BTEqualStrategyNumber, F_OIDEQ,
                ObjectIdGetDatum(castOid));
    scan = systable_beginscan(relation, CastOidIndexId, true,
                              NULL, 1, &scankey);

    tuple = systable_getnext(scan);
    if (!HeapTupleIsValid(tuple))
        elog(ERROR, "could not find tuple for cast %u", castOid);
    CatalogTupleDelete(relation, &tuple->t_self);

    systable_endscan(scan);
    heap_close(relation, RowExclusiveLock);
}


static void
check_transform_function(Form_pg_proc procstruct)
{
    if (procstruct->provolatile == PROVOLATILE_VOLATILE)
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
                 errmsg("transform function must not be volatile")));
    if (procstruct->prokind != PROKIND_FUNCTION)
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
                 errmsg("transform function must be a normal function")));
    if (procstruct->proretset)
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
                 errmsg("transform function must not return a set")));
    if (procstruct->pronargs != 1)
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
                 errmsg("transform function must take one argument")));
    if (procstruct->proargtypes.values[0] != INTERNALOID)
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
                 errmsg("first argument of transform function must be type %s",
                        "internal")));
}
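The rules enforced above boil down to a predicate on a handful of pg_proc fields. A toy standalone model of that predicate (the struct and OID value are illustrative stand-ins, not the backend's real pg_proc layout):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t Oid;
#define TOY_INTERNALOID 2281    /* stand-in for INTERNALOID */

/* The subset of pg_proc fields that check_transform_function inspects */
typedef struct
{
    bool is_volatile;           /* provolatile == PROVOLATILE_VOLATILE */
    bool is_normal_function;    /* prokind == PROKIND_FUNCTION */
    bool returns_set;           /* proretset */
    int  nargs;                 /* pronargs */
    Oid  argtype0;              /* proargtypes.values[0] */
} ToyProc;

/* Mirrors check_transform_function: a stable/immutable normal function,
 * no set result, exactly one argument of type internal. */
static bool
toy_transform_function_ok(const ToyProc *p)
{
    return !p->is_volatile &&
           p->is_normal_function &&
           !p->returns_set &&
           p->nargs == 1 &&
           p->argtype0 == TOY_INTERNALOID;
}
```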


/*
 * CREATE TRANSFORM
 */
ObjectAddress
CreateTransform(CreateTransformStmt *stmt)
{
    Oid         typeid;
    char        typtype;
    Oid         langid;
    Oid         fromsqlfuncid;
    Oid         tosqlfuncid;
    AclResult   aclresult;
    Form_pg_proc procstruct;
    Datum       values[Natts_pg_transform];
    bool        nulls[Natts_pg_transform];
    bool        replaces[Natts_pg_transform];
    Oid         transformid;
    HeapTuple   tuple;
    HeapTuple   newtuple;
    Relation    relation;
    ObjectAddress myself,
                referenced;
    bool        is_replace;

    /*
     * Get the type
     */
    typeid = typenameTypeId(NULL, stmt->type_name);
    typtype = get_typtype(typeid);

    if (typtype == TYPTYPE_PSEUDO)
        ereport(ERROR,
                (errcode(ERRCODE_WRONG_OBJECT_TYPE),
                 errmsg("data type %s is a pseudo-type",
                        TypeNameToString(stmt->type_name))));

    if (typtype == TYPTYPE_DOMAIN)
        ereport(ERROR,
                (errcode(ERRCODE_WRONG_OBJECT_TYPE),
                 errmsg("data type %s is a domain",
                        TypeNameToString(stmt->type_name))));

    if (!pg_type_ownercheck(typeid, GetUserId()))
        aclcheck_error_type(ACLCHECK_NOT_OWNER, typeid);

    aclresult = pg_type_aclcheck(typeid, GetUserId(), ACL_USAGE);
    if (aclresult != ACLCHECK_OK)
        aclcheck_error_type(aclresult, typeid);

    /*
     * Get the language
     */
    langid = get_language_oid(stmt->lang, false);

    aclresult = pg_language_aclcheck(langid, GetUserId(), ACL_USAGE);
    if (aclresult != ACLCHECK_OK)
        aclcheck_error(aclresult, OBJECT_LANGUAGE, stmt->lang);

    /*
     * Get the functions
     */
    if (stmt->fromsql)
    {
        fromsqlfuncid = LookupFuncWithArgs(OBJECT_FUNCTION, stmt->fromsql, false);

        if (!pg_proc_ownercheck(fromsqlfuncid, GetUserId()))
            aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION, NameListToString(stmt->fromsql->objname));

        aclresult = pg_proc_aclcheck(fromsqlfuncid, GetUserId(), ACL_EXECUTE);
        if (aclresult != ACLCHECK_OK)
            aclcheck_error(aclresult, OBJECT_FUNCTION, NameListToString(stmt->fromsql->objname));

        tuple = SearchSysCache1(PROCOID, ObjectIdGetDatum(fromsqlfuncid));
        if (!HeapTupleIsValid(tuple))
            elog(ERROR, "cache lookup failed for function %u", fromsqlfuncid);
        procstruct = (Form_pg_proc) GETSTRUCT(tuple);
        if (procstruct->prorettype != INTERNALOID)
            ereport(ERROR,
                    (errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
                     errmsg("return data type of FROM SQL function must be %s",
                            "internal")));
        check_transform_function(procstruct);
        ReleaseSysCache(tuple);
    }
    else
        fromsqlfuncid = InvalidOid;

    if (stmt->tosql)
    {
        tosqlfuncid = LookupFuncWithArgs(OBJECT_FUNCTION, stmt->tosql, false);

        if (!pg_proc_ownercheck(tosqlfuncid, GetUserId()))
            aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION, NameListToString(stmt->tosql->objname));

        aclresult = pg_proc_aclcheck(tosqlfuncid, GetUserId(), ACL_EXECUTE);
        if (aclresult != ACLCHECK_OK)
            aclcheck_error(aclresult, OBJECT_FUNCTION, NameListToString(stmt->tosql->objname));

        tuple = SearchSysCache1(PROCOID, ObjectIdGetDatum(tosqlfuncid));
        if (!HeapTupleIsValid(tuple))
            elog(ERROR, "cache lookup failed for function %u", tosqlfuncid);
        procstruct = (Form_pg_proc) GETSTRUCT(tuple);
        if (procstruct->prorettype != typeid)
            ereport(ERROR,
                    (errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
                     errmsg("return data type of TO SQL function must be the transform data type")));
        check_transform_function(procstruct);
        ReleaseSysCache(tuple);
    }
    else
        tosqlfuncid = InvalidOid;

    /*
     * Ready to go
     */
    values[Anum_pg_transform_trftype - 1] = ObjectIdGetDatum(typeid);
    values[Anum_pg_transform_trflang - 1] = ObjectIdGetDatum(langid);
    values[Anum_pg_transform_trffromsql - 1] = ObjectIdGetDatum(fromsqlfuncid);
    values[Anum_pg_transform_trftosql - 1] = ObjectIdGetDatum(tosqlfuncid);

    MemSet(nulls, false, sizeof(nulls));

    relation = heap_open(TransformRelationId, RowExclusiveLock);

    tuple = SearchSysCache2(TRFTYPELANG,
                            ObjectIdGetDatum(typeid),
                            ObjectIdGetDatum(langid));
    if (HeapTupleIsValid(tuple))
    {
        if (!stmt->replace)
            ereport(ERROR,
                    (errcode(ERRCODE_DUPLICATE_OBJECT),
                     errmsg("transform for type %s language \"%s\" already exists",
                            format_type_be(typeid),
                            stmt->lang)));

        MemSet(replaces, false, sizeof(replaces));
        replaces[Anum_pg_transform_trffromsql - 1] = true;
        replaces[Anum_pg_transform_trftosql - 1] = true;

        newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values, nulls, replaces);
        CatalogTupleUpdate(relation, &newtuple->t_self, newtuple);

        transformid = HeapTupleGetOid(tuple);
        ReleaseSysCache(tuple);
        is_replace = true;
    }
    else
    {
        newtuple = heap_form_tuple(RelationGetDescr(relation), values, nulls);
        transformid = CatalogTupleInsert(relation, newtuple);
        is_replace = false;
    }

    if (is_replace)
        deleteDependencyRecordsFor(TransformRelationId, transformid, true);

    /* make dependency entries */
    myself.classId = TransformRelationId;
    myself.objectId = transformid;
    myself.objectSubId = 0;

    /* dependency on language */
    referenced.classId = LanguageRelationId;
    referenced.objectId = langid;
    referenced.objectSubId = 0;
    recordDependencyOn(&myself, &referenced, DEPENDENCY_NORMAL);

    /* dependency on type */
    referenced.classId = TypeRelationId;
    referenced.objectId = typeid;
    referenced.objectSubId = 0;
    recordDependencyOn(&myself, &referenced, DEPENDENCY_NORMAL);

    /* dependencies on functions */
    if (OidIsValid(fromsqlfuncid))
    {
        referenced.classId = ProcedureRelationId;
        referenced.objectId = fromsqlfuncid;
        referenced.objectSubId = 0;
        recordDependencyOn(&myself, &referenced, DEPENDENCY_NORMAL);
    }
    if (OidIsValid(tosqlfuncid))
    {
        referenced.classId = ProcedureRelationId;
        referenced.objectId = tosqlfuncid;
        referenced.objectSubId = 0;
        recordDependencyOn(&myself, &referenced, DEPENDENCY_NORMAL);
    }

    /* dependency on extension */
    recordDependencyOnCurrentExtension(&myself, is_replace);

    /* Post creation hook for new transform */
    InvokeObjectPostCreateHook(TransformRelationId, transformid, 0);

    heap_freetuple(newtuple);

    heap_close(relation, RowExclusiveLock);

    return myself;
}
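The catalog step above has three outcomes: an existing pg_transform row is an error without OR REPLACE, an in-place update with it, and otherwise a fresh row is inserted. A toy standalone model of that branch structure (no catalogs involved; names are invented):

```c
#include <assert.h>
#include <stdbool.h>

/* Possible outcomes of the CREATE [OR REPLACE] TRANSFORM catalog step */
typedef enum
{
    TRF_ERROR_DUPLICATE,        /* row exists, no OR REPLACE: ereport */
    TRF_UPDATED,                /* row exists, OR REPLACE: CatalogTupleUpdate */
    TRF_INSERTED                /* no row: CatalogTupleInsert */
} TrfOutcome;

/* Mirrors CreateTransform's decision on the syscache hit */
static TrfOutcome
transform_upsert_outcome(bool row_exists, bool replace_requested)
{
    if (row_exists)
        return replace_requested ? TRF_UPDATED : TRF_ERROR_DUPLICATE;
    return TRF_INSERTED;
}
```

Note that in the replace path the real code also drops and re-records the transform's dependencies (`deleteDependencyRecordsFor`), since the referenced functions may have changed.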


/*
 * get_transform_oid - given type OID and language OID, look up a transform OID
 *
 * If missing_ok is false, throw an error if the transform is not found.  If
 * true, just return InvalidOid.
 */
Oid
get_transform_oid(Oid type_id, Oid lang_id, bool missing_ok)
{
    Oid         oid;

    oid = GetSysCacheOid2(TRFTYPELANG,
                          ObjectIdGetDatum(type_id),
                          ObjectIdGetDatum(lang_id));
    if (!OidIsValid(oid) && !missing_ok)
        ereport(ERROR,
                (errcode(ERRCODE_UNDEFINED_OBJECT),
                 errmsg("transform for type %s language \"%s\" does not exist",
                        format_type_be(type_id),
                        get_language_name(lang_id, false))));
    return oid;
}
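Both get_cast_oid and get_transform_oid follow the same missing_ok convention: a cache miss is fatal unless the caller opted in. A toy model of that contract (the two-entry "catalog" is invented, and the fatal path is modeled by a flag rather than ereport):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t Oid;
#define InvalidOid ((Oid) 0)

/* Invented two-key lookup standing in for the syscache */
static Oid
toy_lookup(Oid key1, Oid key2)
{
    if (key1 == 25 && key2 == 13)   /* one known (e.g. type, language) pair */
        return 4001;
    return InvalidOid;
}

/* The missing_ok contract: on a miss, either flag an error (the backend
 * would ereport(ERROR) here) or silently return InvalidOid. */
static Oid
toy_get_oid(Oid key1, Oid key2, bool missing_ok, bool *error)
{
    Oid         oid = toy_lookup(key1, key2);

    *error = (oid == InvalidOid && !missing_ok);
    return oid;
}
```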


void
DropTransformById(Oid transformOid)
{
    Relation    relation;
    ScanKeyData scankey;
    SysScanDesc scan;
    HeapTuple   tuple;

    relation = heap_open(TransformRelationId, RowExclusiveLock);

    ScanKeyInit(&scankey,
                ObjectIdAttributeNumber,
                BTEqualStrategyNumber, F_OIDEQ,
                ObjectIdGetDatum(transformOid));
    scan = systable_beginscan(relation, TransformOidIndexId, true,
                              NULL, 1, &scankey);

    tuple = systable_getnext(scan);
    if (!HeapTupleIsValid(tuple))
        elog(ERROR, "could not find tuple for transform %u", transformOid);
    CatalogTupleDelete(relation, &tuple->t_self);

    systable_endscan(scan);
    heap_close(relation, RowExclusiveLock);
}


/*
 * Subroutine for ALTER FUNCTION/AGGREGATE SET SCHEMA/RENAME
 *
 * Is there a function with the given name and signature already in the given
 * namespace?  If so, raise an appropriate error message.
 */
void
IsThereFunctionInNamespace(const char *proname, int pronargs,
                           oidvector *proargtypes, Oid nspOid)
{
    /* check for duplicate name (more friendly than unique-index failure) */
    if (SearchSysCacheExists3(PROCNAMEARGSNSP,
                              CStringGetDatum(proname),
                              PointerGetDatum(proargtypes),
                              ObjectIdGetDatum(nspOid)))
        ereport(ERROR,
                (errcode(ERRCODE_DUPLICATE_FUNCTION),
                 errmsg("function %s already exists in schema \"%s\"",
                        funcname_signature_string(proname, pronargs,
                                                  NIL, proargtypes->values),
                        get_namespace_name(nspOid))));
}
|
2009-09-23 01:43:43 +02:00
|
|
|
|
|
|
|
/*
|
|
|
|
* ExecuteDoStmt
|
|
|
|
* Execute inline procedural-language code
|
Transaction control in PL procedures
In each of the supplied procedural languages (PL/pgSQL, PL/Perl,
PL/Python, PL/Tcl), add language-specific commit and rollback
functions/commands to control transactions in procedures in that
language. Add similar underlying functions to SPI. Some additional
cleanup so that transaction commit or abort doesn't blow away data
structures still used by the procedure call. Add execution context
tracking to CALL and DO statements so that transaction control commands
can only be issued in top-level procedure and block calls, not function
calls or other procedure or block calls.
- SPI
Add a new function SPI_connect_ext() that is like SPI_connect() but
allows passing option flags. The only option flag right now is
SPI_OPT_NONATOMIC. A nonatomic SPI connection can execute transaction
control commands, otherwise it's not allowed. This is meant to be
passed down from CALL and DO statements which themselves know in which
context they are called. A nonatomic SPI connection uses different
memory management. A normal SPI connection allocates its memory in
TopTransactionContext. For nonatomic connections we use PortalContext
instead. As the comment in SPI_connect_ext() (previously SPI_connect())
indicates, one could potentially use PortalContext in all cases, but it
seems safest to leave the existing uses alone, because this stuff is
complicated enough already.
SPI also gets new functions SPI_start_transaction(), SPI_commit(), and
SPI_rollback(), which can be used by PLs to implement their transaction
control logic.
- portalmem.c
Some adjustments were made in the code that cleans up portals at
transaction abort. The portal code could already handle a command
*committing* a transaction and continuing (e.g., VACUUM), but it was not
quite prepared for a command *aborting* a transaction and continuing.
In AtAbort_Portals(), remove the code that marks an active portal as
failed. As the comment there already predicted, this doesn't work if
the running command wants to keep running after transaction abort. And
it's actually not necessary, because pquery.c is careful to run all
portal code in a PG_TRY block and explicitly runs MarkPortalFailed() if
there is an exception. So the code in AtAbort_Portals() is never used
anyway.
In AtAbort_Portals() and AtCleanup_Portals(), we need to be careful not
to clean up active portals too much. This mirrors similar code in
PreCommit_Portals().
- PL/Perl
Gets new functions spi_commit() and spi_rollback()
- PL/pgSQL
Gets new commands COMMIT and ROLLBACK.
Update the PL/SQL porting example in the documentation to reflect that
transactions are now possible in procedures.
- PL/Python
Gets new functions plpy.commit and plpy.rollback.
- PL/Tcl
Gets new commands commit and rollback.
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
2018-01-22 14:30:16 +01:00
|
|
|
*
|
|
|
|
* See at ExecuteCallStmt() about the atomic argument.
|
2009-09-23 01:43:43 +02:00
|
|
|
*/
|
|
|
|
void
|
ExecuteDoStmt(DoStmt *stmt, bool atomic)
{
	InlineCodeBlock *codeblock = makeNode(InlineCodeBlock);
	ListCell   *arg;
	DefElem    *as_item = NULL;
	DefElem    *language_item = NULL;
	char	   *language;
	Oid			laninline;
	HeapTuple	languageTuple;
	Form_pg_language languageStruct;

	/* Process options we got from gram.y */
	foreach(arg, stmt->args)
	{
		DefElem    *defel = (DefElem *) lfirst(arg);

		if (strcmp(defel->defname, "as") == 0)
		{
			if (as_item)
				ereport(ERROR,
						(errcode(ERRCODE_SYNTAX_ERROR),
						 errmsg("conflicting or redundant options")));
			as_item = defel;
		}
		else if (strcmp(defel->defname, "language") == 0)
		{
			if (language_item)
				ereport(ERROR,
						(errcode(ERRCODE_SYNTAX_ERROR),
						 errmsg("conflicting or redundant options")));
			language_item = defel;
		}
		else
			elog(ERROR, "option \"%s\" not recognized",
				 defel->defname);
	}

	if (as_item)
		codeblock->source_text = strVal(as_item->arg);
	else
		ereport(ERROR,
				(errcode(ERRCODE_SYNTAX_ERROR),
				 errmsg("no inline code specified")));

	/* if LANGUAGE option wasn't specified, use the default */
	if (language_item)
		language = strVal(language_item->arg);
	else
		language = "plpgsql";

	/* Look up the language and validate permissions */
	languageTuple = SearchSysCache1(LANGNAME, PointerGetDatum(language));
	if (!HeapTupleIsValid(languageTuple))
		ereport(ERROR,
				(errcode(ERRCODE_UNDEFINED_OBJECT),
				 errmsg("language \"%s\" does not exist", language),
				 (PLTemplateExists(language) ?
				  errhint("Use CREATE EXTENSION to load the language into the database.") : 0)));

	codeblock->langOid = HeapTupleGetOid(languageTuple);
	languageStruct = (Form_pg_language) GETSTRUCT(languageTuple);
	codeblock->langIsTrusted = languageStruct->lanpltrusted;
	codeblock->atomic = atomic;

	if (languageStruct->lanpltrusted)
	{
		/* if trusted language, need USAGE privilege */
		AclResult	aclresult;

		aclresult = pg_language_aclcheck(codeblock->langOid, GetUserId(),
										 ACL_USAGE);
		if (aclresult != ACLCHECK_OK)
			aclcheck_error(aclresult, OBJECT_LANGUAGE,
						   NameStr(languageStruct->lanname));
	}
	else
	{
		/* if untrusted language, must be superuser */
		if (!superuser())
			aclcheck_error(ACLCHECK_NO_PRIV, OBJECT_LANGUAGE,
						   NameStr(languageStruct->lanname));
	}

	/* get the handler function's OID */
	laninline = languageStruct->laninline;
	if (!OidIsValid(laninline))
		ereport(ERROR,
				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
				 errmsg("language \"%s\" does not support inline code execution",
						NameStr(languageStruct->lanname))));

	ReleaseSysCache(languageTuple);

	/* execute the inline handler */
	OidFunctionCall1(laninline, PointerGetDatum(codeblock));
}

/*
 * Execute CALL statement
 *
 * Inside a top-level CALL statement, transaction-terminating commands such as
 * COMMIT or a PL-specific equivalent are allowed.  The terminology in the SQL
 * standard is that CALL establishes a non-atomic execution context.  Most
 * other commands establish an atomic execution context, in which transaction
 * control actions are not allowed.  If there are nested executions of CALL,
 * we want to track the execution context recursively, so that the nested
 * CALLs can also do transaction control.  Note, however, that for example in
 * CALL -> SELECT -> CALL, the second call cannot do transaction control,
 * because the SELECT in between establishes an atomic execution context.
 *
 * So when ExecuteCallStmt() is called from the top level, we pass in atomic =
 * false (recall that that means transactions = yes).  We then create a
 * CallContext node with content atomic = false, which is passed in the
 * fcinfo->context field to the procedure invocation.  The language
 * implementation should then take appropriate measures to allow or prevent
 * transaction commands based on that information, e.g., call
 * SPI_connect_ext(SPI_OPT_NONATOMIC).  The language should also pass on the
 * atomic flag to any nested invocations of CALL.
 *
 * The expression data structures and execution context that we create
 * within this function are children of the portalContext of the Portal
 * that the CALL utility statement runs in.  Therefore, any pass-by-ref
 * values that we're passing to the procedure will survive transaction
 * commits that might occur inside the procedure.
 */
void
ExecuteCallStmt(CallStmt *stmt, ParamListInfo params, bool atomic, DestReceiver *dest)
{
	ListCell   *lc;
	FuncExpr   *fexpr;
	int			nargs;
	int			i;
	AclResult	aclresult;
	FmgrInfo	flinfo;
	FunctionCallInfoData fcinfo;
	CallContext *callcontext;
	EState	   *estate;
	ExprContext *econtext;
	HeapTuple	tp;
	Datum		retval;

	fexpr = stmt->funcexpr;
	Assert(fexpr);

	aclresult = pg_proc_aclcheck(fexpr->funcid, GetUserId(), ACL_EXECUTE);
	if (aclresult != ACLCHECK_OK)
		aclcheck_error(aclresult, OBJECT_PROCEDURE, get_func_name(fexpr->funcid));

	/* Prep the context object we'll pass to the procedure */
	callcontext = makeNode(CallContext);
	callcontext->atomic = atomic;

	tp = SearchSysCache1(PROCOID, ObjectIdGetDatum(fexpr->funcid));
	if (!HeapTupleIsValid(tp))
		elog(ERROR, "cache lookup failed for function %u", fexpr->funcid);

	/*
	 * If proconfig is set we can't allow transaction commands because of the
	 * way the GUC stacking works: The transaction boundary would have to pop
	 * the proconfig setting off the stack.  That restriction could be lifted
	 * by redesigning the GUC nesting mechanism a bit.
	 */
	if (!heap_attisnull(tp, Anum_pg_proc_proconfig, NULL))
|
Transaction control in PL procedures
In each of the supplied procedural languages (PL/pgSQL, PL/Perl,
PL/Python, PL/Tcl), add language-specific commit and rollback
functions/commands to control transactions in procedures in that
language. Add similar underlying functions to SPI. Some additional
cleanup so that transaction commit or abort doesn't blow away data
structures still used by the procedure call. Add execution context
tracking to CALL and DO statements so that transaction control commands
can only be issued in top-level procedure and block calls, not function
calls or other procedure or block calls.
- SPI
Add a new function SPI_connect_ext() that is like SPI_connect() but
allows passing option flags. The only option flag right now is
SPI_OPT_NONATOMIC. A nonatomic SPI connection can execute transaction
control commands, otherwise it's not allowed. This is meant to be
passed down from CALL and DO statements which themselves know in which
context they are called. A nonatomic SPI connection uses different
memory management. A normal SPI connection allocates its memory in
TopTransactionContext. For nonatomic connections we use PortalContext
instead. As the comment in SPI_connect_ext() (previously SPI_connect())
indicates, one could potentially use PortalContext in all cases, but it
seems safest to leave the existing uses alone, because this stuff is
complicated enough already.
SPI also gets new functions SPI_start_transaction(), SPI_commit(), and
SPI_rollback(), which can be used by PLs to implement their transaction
control logic.
- portalmem.c
Some adjustments were made in the code that cleans up portals at
transaction abort. The portal code could already handle a command
*committing* a transaction and continuing (e.g., VACUUM), but it was not
quite prepared for a command *aborting* a transaction and continuing.
In AtAbort_Portals(), remove the code that marks an active portal as
failed. As the comment there already predicted, this doesn't work if
the running command wants to keep running after transaction abort. And
it's actually not necessary, because pquery.c is careful to run all
portal code in a PG_TRY block and explicitly runs MarkPortalFailed() if
there is an exception. So the code in AtAbort_Portals() is never used
anyway.
In AtAbort_Portals() and AtCleanup_Portals(), we need to be careful not
to clean up active portals too much. This mirrors similar code in
PreCommit_Portals().
- PL/Perl
Gets new functions spi_commit() and spi_rollback().
- PL/pgSQL
Gets new commands COMMIT and ROLLBACK.
Update the PL/SQL porting example in the documentation to reflect that
transactions are now possible in procedures.
- PL/Python
Gets new functions plpy.commit and plpy.rollback.
- PL/Tcl
Gets new commands commit and rollback.
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
2018-01-22 14:30:16 +01:00
		callcontext->atomic = true;

	/*
	 * Expand named arguments, defaults, etc.
	 */
	fexpr->args = expand_function_arguments(fexpr->args, fexpr->funcresulttype, tp);
	nargs = list_length(fexpr->args);
	ReleaseSysCache(tp);

	/* safety check; see ExecInitFunc() */
	if (nargs > FUNC_MAX_ARGS)
		ereport(ERROR,
				(errcode(ERRCODE_TOO_MANY_ARGUMENTS),
				 errmsg_plural("cannot pass more than %d argument to a procedure",
							   "cannot pass more than %d arguments to a procedure",
							   FUNC_MAX_ARGS,
							   FUNC_MAX_ARGS)));
Avoid premature free of pass-by-reference CALL arguments.
Prematurely freeing the EState used to evaluate CALL arguments led, in some
cases, to passing dangling pointers to the procedure. This was masked in
trivial cases because the argument pointers would point to Const nodes in
the original expression tree, and in some other cases because the result
value would end up in the standalone ExprContext rather than in memory
belonging to the EState --- but that wasn't exactly high quality
programming either, because the standalone ExprContext was never
explicitly freed, breaking assorted API contracts.
In addition, using a separate EState for each argument was just silly.
So let's use just one EState, and one ExprContext, and make the latter
belong to the former rather than be standalone, and clean up the EState
(and hence the ExprContext) post-call.
While at it, improve the function's commentary a bit.
Discussion: https://postgr.es/m/29173.1518282748@sss.pgh.pa.us
2018-02-10 19:37:12 +01:00
	/* Initialize function call structure */
	InvokeFunctionExecuteHook(fexpr->funcid);
	fmgr_info(fexpr->funcid, &flinfo);
	InitFunctionCallInfoData(fcinfo, &flinfo, nargs, fexpr->inputcollid,
							 (Node *) callcontext, NULL);
	/*
	 * Evaluate procedure arguments inside a suitable execution context. Note
	 * we can't free this context till the procedure returns.
	 */
	estate = CreateExecutorState();
	estate->es_param_list_info = params;
	econtext = CreateExprContext(estate);

	i = 0;
	foreach(lc, fexpr->args)
	{
		ExprState  *exprstate;
		Datum		val;
		bool		isnull;

		exprstate = ExecPrepareExpr(lfirst(lc), estate);
		val = ExecEvalExprSwitchContext(exprstate, econtext, &isnull);

		fcinfo.arg[i] = val;
		fcinfo.argnull[i] = isnull;

		i++;
	}
	retval = FunctionCallInvoke(&fcinfo);

	if (fexpr->funcresulttype == VOIDOID)
	{
		/* do nothing */
	}
	else if (fexpr->funcresulttype == RECORDOID)
	{
		/*
		 * send tuple to client
		 */
		HeapTupleHeader td;
		Oid			tupType;
		int32		tupTypmod;
		TupleDesc	retdesc;
		HeapTupleData rettupdata;
		TupOutputState *tstate;
		TupleTableSlot *slot;

		if (fcinfo.isnull)
			elog(ERROR, "procedure returned null record");

		td = DatumGetHeapTupleHeader(retval);
		tupType = HeapTupleHeaderGetTypeId(td);
		tupTypmod = HeapTupleHeaderGetTypMod(td);
		retdesc = lookup_rowtype_tupdesc(tupType, tupTypmod);

		tstate = begin_tup_output_tupdesc(dest, retdesc);

		rettupdata.t_len = HeapTupleHeaderGetDatumLength(td);
		ItemPointerSetInvalid(&(rettupdata.t_self));
		rettupdata.t_tableOid = InvalidOid;
		rettupdata.t_data = td;

		slot = ExecStoreTuple(&rettupdata, tstate->slot, InvalidBuffer, false);
		tstate->dest->receiveSlot(slot, tstate->dest);

		end_tup_output(tstate);

		ReleaseTupleDesc(retdesc);
	}
	else
		elog(ERROR, "unexpected result type for procedure: %u",
			 fexpr->funcresulttype);
	FreeExecutorState(estate);
}