/*-------------------------------------------------------------------------
 *
 * parse_func.c
 *		handle function calls in parser
 *
 * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 *
 * IDENTIFICATION
 *		src/backend/parser/parse_func.c
 *
 *-------------------------------------------------------------------------
 */
#include "postgres.h"

#include "access/htup_details.h"
#include "catalog/pg_aggregate.h"
#include "catalog/pg_proc.h"
#include "catalog/pg_type.h"
#include "funcapi.h"
#include "lib/stringinfo.h"
#include "nodes/makefuncs.h"
#include "nodes/nodeFuncs.h"
#include "parser/parse_agg.h"
#include "parser/parse_clause.h"
#include "parser/parse_coerce.h"
#include "parser/parse_expr.h"
#include "parser/parse_func.h"
#include "parser/parse_relation.h"
#include "parser/parse_target.h"
#include "parser/parse_type.h"
#include "utils/builtins.h"
#include "utils/lsyscache.h"
#include "utils/syscache.h"

static void unify_hypothetical_args(ParseState *pstate,
						List *fargs, int numAggregatedArgs,
						Oid *actual_arg_types, Oid *declared_arg_types);
static Oid	FuncNameAsType(List *funcname);
static Node *ParseComplexProjection(ParseState *pstate, const char *funcname,
						Node *first_arg, int location);

/*
 *	Parse a function call
 *
 *	For historical reasons, Postgres tries to treat the notations tab.col
 *	and col(tab) as equivalent: if a single-argument function call has an
 *	argument of complex type and the (unqualified) function name matches
 *	any attribute of the type, we take it as a column projection.  Conversely
 *	a function of a single complex-type argument can be written like a
 *	column reference, allowing functions to act like computed columns.
 *
 *	Hence, both cases come through here.  If fn is null, we're dealing with
 *	column syntax not function syntax, but in principle that should not
 *	affect the lookup behavior, only which error messages we deliver.
 *	The FuncCall struct is needed however to carry various decoration that
 *	applies to aggregate and window functions.
 *
 *	Also, when fn is null, we return NULL on failure rather than
 *	reporting a no-such-function error.
 *
 *	The argument expressions (in fargs) must have been transformed
 *	already.  However, nothing in *fn has been transformed.
 *
 *	last_srf should be a copy of pstate->p_last_srf from just before we
 *	started transforming fargs.  If the caller knows that fargs couldn't
 *	contain any SRF calls, last_srf can just be pstate->p_last_srf.
 */
Node *
ParseFuncOrColumn(ParseState *pstate, List *funcname, List *fargs,
				  Node *last_srf, FuncCall *fn, bool proc_call, int location)
{
	bool		is_column = (fn == NULL);
	List	   *agg_order = (fn ? fn->agg_order : NIL);
	Expr	   *agg_filter = NULL;
	bool		agg_within_group = (fn ? fn->agg_within_group : false);
	bool		agg_star = (fn ? fn->agg_star : false);
	bool		agg_distinct = (fn ? fn->agg_distinct : false);
	bool		func_variadic = (fn ? fn->func_variadic : false);
	WindowDef  *over = (fn ? fn->over : NULL);
	Oid			rettype;
	Oid			funcid;
	ListCell   *l;
	ListCell   *nextl;
	Node	   *first_arg = NULL;
	int			nargs;
	int			nargsplusdefs;
	Oid			actual_arg_types[FUNC_MAX_ARGS];
	Oid		   *declared_arg_types;
	List	   *argnames;
	List	   *argdefaults;
	Node	   *retval;
	bool		retset;
	int			nvargs;
	Oid			vatype;
	FuncDetailCode fdresult;
	char		aggkind = 0;
	ParseCallbackState pcbstate;

	/*
	 * If there's an aggregate filter, transform it using transformWhereClause
	 */
	if (fn && fn->agg_filter != NULL)
		agg_filter = (Expr *) transformWhereClause(pstate, fn->agg_filter,
												   EXPR_KIND_FILTER,
												   "FILTER");

	/*
	 * Most of the rest of the parser just assumes that functions do not have
	 * more than FUNC_MAX_ARGS parameters.  We have to test here to protect
	 * against array overruns, etc.  Of course, this may not be a function,
	 * but the test doesn't hurt.
	 */
	if (list_length(fargs) > FUNC_MAX_ARGS)
		ereport(ERROR,
				(errcode(ERRCODE_TOO_MANY_ARGUMENTS),
				 errmsg_plural("cannot pass more than %d argument to a function",
							   "cannot pass more than %d arguments to a function",
							   FUNC_MAX_ARGS,
							   FUNC_MAX_ARGS),
				 parser_errposition(pstate, location)));

	/*
	 * Extract arg type info in preparation for function lookup.
	 *
	 * If any arguments are Param markers of type VOID, we discard them from
	 * the parameter list. This is a hack to allow the JDBC driver to not have
	 * to distinguish "input" and "output" parameter symbols while parsing
	 * function-call constructs.  Don't do this if dealing with column syntax,
	 * nor if we had WITHIN GROUP (because in that case it's critical to keep
	 * the argument count unchanged).  We can't use foreach() because we may
	 * modify the list ...
	 */
	nargs = 0;
	for (l = list_head(fargs); l != NULL; l = nextl)
	{
		Node	   *arg = lfirst(l);
		Oid			argtype = exprType(arg);

		nextl = lnext(l);

		if (argtype == VOIDOID && IsA(arg, Param) &&
			!is_column && !agg_within_group)
		{
			fargs = list_delete_ptr(fargs, arg);
			continue;
		}

		actual_arg_types[nargs++] = argtype;
	}

	/*
	 * Check for named arguments; if there are any, build a list of names.
	 *
	 * We allow mixed notation (some named and some not), but only with all
	 * the named parameters after all the unnamed ones.  So the name list
	 * corresponds to the last N actual parameters and we don't need any extra
	 * bookkeeping to match things up.
	 */
	argnames = NIL;
	foreach(l, fargs)
	{
		Node	   *arg = lfirst(l);

		if (IsA(arg, NamedArgExpr))
		{
			NamedArgExpr *na = (NamedArgExpr *) arg;
			ListCell   *lc;

			/* Reject duplicate arg names */
			foreach(lc, argnames)
			{
				if (strcmp(na->name, (char *) lfirst(lc)) == 0)
					ereport(ERROR,
							(errcode(ERRCODE_SYNTAX_ERROR),
							 errmsg("argument name \"%s\" used more than once",
									na->name),
							 parser_errposition(pstate, na->location)));
			}
			argnames = lappend(argnames, na->name);
		}
		else
		{
			if (argnames != NIL)
				ereport(ERROR,
						(errcode(ERRCODE_SYNTAX_ERROR),
						 errmsg("positional argument cannot follow named argument"),
						 parser_errposition(pstate, exprLocation(arg))));
		}
	}

	if (fargs)
	{
		first_arg = linitial(fargs);
		Assert(first_arg != NULL);
	}

	/*
	 * Check for column projection: if function has one argument, and that
	 * argument is of complex type, and function name is not qualified, then
	 * the "function call" could be a projection.  We also check that there
	 * wasn't any aggregate or variadic decoration, nor an argument name.
	 */
	if (nargs == 1 && agg_order == NIL && agg_filter == NULL && !agg_star &&
		!agg_distinct && over == NULL && !func_variadic && argnames == NIL &&
		list_length(funcname) == 1)
	{
		Oid			argtype = actual_arg_types[0];

		if (argtype == RECORDOID || ISCOMPLEX(argtype))
		{
			retval = ParseComplexProjection(pstate,
											strVal(linitial(funcname)),
											first_arg,
											location);
			if (retval)
				return retval;

			/*
			 * If ParseComplexProjection doesn't recognize it as a projection,
			 * just press on.
			 */
		}
	}

1997-11-25 23:07:18 +01:00
/*
2005-06-22 17:19:43 +02:00
* Okay , it ' s not a column projection , so it must really be a function .
2002-03-21 17:02:16 +01:00
* func_get_detail looks up the function in the catalogs , does
2002-09-04 22:31:48 +02:00
* disambiguation for polymorphic functions , handles inheritance , and
* returns the funcid and type and set or singleton status of the
2008-07-16 03:30:23 +02:00
* function ' s return value . It also returns the true argument types to
2009-10-08 04:39:25 +02:00
* the function .
*
* Note : for a named - notation or variadic function call , the reported
* " true " types aren ' t really what is in pg_proc : the types are reordered
* to match the given argument order of named arguments , and a variadic
* argument is replaced by a suitable number of copies of its element
* type . We ' ll fix up the variadic case below . We may also have to deal
* with default arguments .
1997-11-25 23:07:18 +01:00
*/
2015-03-18 18:48:02 +01:00
setup_parser_errposition_callback ( & pcbstate , pstate , location ) ;
2009-10-08 04:39:25 +02:00
fdresult = func_get_detail ( funcname , fargs , argnames , nargs ,
actual_arg_types ,
2008-12-18 19:20:35 +01:00
! func_variadic , true ,
2013-07-18 17:52:12 +02:00
& funcid , & rettype , & retset ,
& nvargs , & vatype ,
2008-12-04 18:51:28 +01:00
& declared_arg_types , & argdefaults ) ;
2015-03-18 18:48:02 +01:00
cancel_parser_errposition_callback ( & pcbstate ) ;
	if (fdresult == FUNCDETAIL_COERCION)
	{
		/*
		 * We interpreted it as a type coercion.  coerce_type can handle these
		 * cases, so why duplicate code...
		 */
		return coerce_type(pstate, linitial(fargs),
						   actual_arg_types[0], rettype, -1,
						   COERCION_EXPLICIT, COERCE_EXPLICIT_CALL, location);
	}
	else if (fdresult == FUNCDETAIL_NORMAL || fdresult == FUNCDETAIL_PROCEDURE)
	{
		/*
		 * Normal function found; was there anything indicating it must be an
		 * aggregate?
		 */
		if (agg_star)
			ereport(ERROR,
					(errcode(ERRCODE_WRONG_OBJECT_TYPE),
					 errmsg("%s(*) specified, but %s is not an aggregate function",
							NameListToString(funcname),
							NameListToString(funcname)),
					 parser_errposition(pstate, location)));
		if (agg_distinct)
			ereport(ERROR,
					(errcode(ERRCODE_WRONG_OBJECT_TYPE),
					 errmsg("DISTINCT specified, but %s is not an aggregate function",
							NameListToString(funcname)),
					 parser_errposition(pstate, location)));
		if (agg_within_group)
			ereport(ERROR,
					(errcode(ERRCODE_WRONG_OBJECT_TYPE),
					 errmsg("WITHIN GROUP specified, but %s is not an aggregate function",
							NameListToString(funcname)),
					 parser_errposition(pstate, location)));
		if (agg_order != NIL)
			ereport(ERROR,
					(errcode(ERRCODE_WRONG_OBJECT_TYPE),
					 errmsg("ORDER BY specified, but %s is not an aggregate function",
							NameListToString(funcname)),
					 parser_errposition(pstate, location)));
		if (agg_filter)
			ereport(ERROR,
					(errcode(ERRCODE_WRONG_OBJECT_TYPE),
					 errmsg("FILTER specified, but %s is not an aggregate function",
							NameListToString(funcname)),
					 parser_errposition(pstate, location)));
		if (over)
			ereport(ERROR,
					(errcode(ERRCODE_WRONG_OBJECT_TYPE),
					 errmsg("OVER specified, but %s is not a window function nor an aggregate function",
							NameListToString(funcname)),
					 parser_errposition(pstate, location)));
		if (fdresult == FUNCDETAIL_NORMAL && proc_call)
			ereport(ERROR,
					(errcode(ERRCODE_UNDEFINED_FUNCTION),
					 errmsg("%s is not a procedure",
							func_signature_string(funcname, nargs,
												  argnames,
												  actual_arg_types)),
					 errhint("To call a function, use SELECT."),
					 parser_errposition(pstate, location)));
		if (fdresult == FUNCDETAIL_PROCEDURE && !proc_call)
			ereport(ERROR,
					(errcode(ERRCODE_UNDEFINED_FUNCTION),
					 errmsg("%s is a procedure",
							func_signature_string(funcname, nargs,
												  argnames,
												  actual_arg_types)),
					 errhint("To call a procedure, use CALL."),
					 parser_errposition(pstate, location)));
	}
	else if (fdresult == FUNCDETAIL_AGGREGATE)
	{
		/*
		 * It's an aggregate; fetch needed info from the pg_aggregate entry.
		 */
		HeapTuple	tup;
		Form_pg_aggregate classForm;
		int			catDirectArgs;

		if (proc_call)
			ereport(ERROR,
					(errcode(ERRCODE_UNDEFINED_FUNCTION),
					 errmsg("%s is not a procedure",
							func_signature_string(funcname, nargs,
												  argnames,
												  actual_arg_types)),
					 parser_errposition(pstate, location)));

		tup = SearchSysCache1(AGGFNOID, ObjectIdGetDatum(funcid));
		if (!HeapTupleIsValid(tup)) /* should not happen */
			elog(ERROR, "cache lookup failed for aggregate %u", funcid);
		classForm = (Form_pg_aggregate) GETSTRUCT(tup);
		aggkind = classForm->aggkind;
		catDirectArgs = classForm->aggnumdirectargs;
		ReleaseSysCache(tup);

		/* Now check various disallowed cases. */
		if (AGGKIND_IS_ORDERED_SET(aggkind))
		{
			int			numAggregatedArgs;
			int			numDirectArgs;

			if (!agg_within_group)
				ereport(ERROR,
						(errcode(ERRCODE_WRONG_OBJECT_TYPE),
						 errmsg("WITHIN GROUP is required for ordered-set aggregate %s",
								NameListToString(funcname)),
						 parser_errposition(pstate, location)));
			if (over)
				ereport(ERROR,
						(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
						 errmsg("OVER is not supported for ordered-set aggregate %s",
								NameListToString(funcname)),
						 parser_errposition(pstate, location)));
			/* gram.y rejects DISTINCT + WITHIN GROUP */
			Assert(!agg_distinct);
			/* gram.y rejects VARIADIC + WITHIN GROUP */
			Assert(!func_variadic);

			/*
			 * Since func_get_detail was working with an undifferentiated list
			 * of arguments, it might have selected an aggregate that doesn't
			 * really match because it requires a different division of direct
			 * and aggregated arguments.  Check that the number of direct
			 * arguments is actually OK; if not, throw an "undefined function"
			 * error, similarly to the case where a misplaced ORDER BY is used
			 * in a regular aggregate call.
			 */
			numAggregatedArgs = list_length(agg_order);
			numDirectArgs = nargs - numAggregatedArgs;
			Assert(numDirectArgs >= 0);

			if (!OidIsValid(vatype))
			{
				/* Test is simple if aggregate isn't variadic */
				if (numDirectArgs != catDirectArgs)
					ereport(ERROR,
							(errcode(ERRCODE_UNDEFINED_FUNCTION),
							 errmsg("function %s does not exist",
									func_signature_string(funcname, nargs,
														  argnames,
														  actual_arg_types)),
							 errhint("There is an ordered-set aggregate %s, but it requires %d direct arguments, not %d.",
									 NameListToString(funcname),
									 catDirectArgs, numDirectArgs),
							 parser_errposition(pstate, location)));
			}
			else
			{
				/*
				 * If it's variadic, we have two cases depending on whether
				 * the agg was "... ORDER BY VARIADIC" or "..., VARIADIC ORDER
				 * BY VARIADIC".  It's the latter if catDirectArgs equals
				 * pronargs; to save a catalog lookup, we reverse-engineer
				 * pronargs from the info we got from func_get_detail.
				 */
				int			pronargs;

				pronargs = nargs;
				if (nvargs > 1)
					pronargs -= nvargs - 1;
				if (catDirectArgs < pronargs)
				{
					/* VARIADIC isn't part of direct args, so still easy */
					if (numDirectArgs != catDirectArgs)
						ereport(ERROR,
								(errcode(ERRCODE_UNDEFINED_FUNCTION),
								 errmsg("function %s does not exist",
										func_signature_string(funcname, nargs,
															  argnames,
															  actual_arg_types)),
								 errhint("There is an ordered-set aggregate %s, but it requires %d direct arguments, not %d.",
										 NameListToString(funcname),
										 catDirectArgs, numDirectArgs),
								 parser_errposition(pstate, location)));
				}
}
else
{
/*
* Both direct and aggregated args were declared variadic .
* For a standard ordered - set aggregate , it ' s okay as long
* as there aren ' t too few direct args . For a
* hypothetical - set aggregate , we assume that the
* hypothetical arguments are those that matched the
* variadic parameter ; there must be just as many of them
* as there are aggregated arguments .
*/
if ( aggkind = = AGGKIND_HYPOTHETICAL )
{
if ( nvargs ! = 2 * numAggregatedArgs )
ereport ( ERROR ,
( errcode ( ERRCODE_UNDEFINED_FUNCTION ) ,
errmsg ( " function %s does not exist " ,
Phase 3 of pgindent updates.
Don't move parenthesized lines to the left, even if that means they
flow past the right margin.
By default, BSD indent lines up statement continuation lines that are
within parentheses so that they start just to the right of the preceding
left parenthesis. However, traditionally, if that resulted in the
continuation line extending to the right of the desired right margin,
then indent would push it left just far enough to not overrun the margin,
if it could do so without making the continuation line start to the left of
the current statement indent. That makes for a weird mix of indentations
unless one has been completely rigid about never violating the 80-column
limit.
This behavior has been pretty universally panned by Postgres developers.
Hence, disable it with indent's new -lpl switch, so that parenthesized
lines are always lined up with the preceding left paren.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 21:35:54 +02:00
func_signature_string ( funcname , nargs ,
argnames ,
actual_arg_types ) ) ,
Support ordered-set (WITHIN GROUP) aggregates.
This patch introduces generic support for ordered-set and hypothetical-set
aggregate functions, as well as implementations of the instances defined in
SQL:2008 (percentile_cont(), percentile_disc(), rank(), dense_rank(),
percent_rank(), cume_dist()). We also added mode() though it is not in the
spec, as well as versions of percentile_cont() and percentile_disc() that
can compute multiple percentile values in one pass over the data.
Unlike the original submission, this patch puts full control of the sorting
process in the hands of the aggregate's support functions. To allow the
support functions to find out how they're supposed to sort, a new API
function AggGetAggref() is added to nodeAgg.c. This allows retrieval of
the aggregate call's Aggref node, which may have other uses beyond the
immediate need. There is also support for ordered-set aggregates to
install cleanup callback functions, so that they can be sure that
infrastructure such as tuplesort objects gets cleaned up.
In passing, make some fixes in the recently-added support for variadic
aggregates, and make some editorial adjustments in the recent FILTER
additions for aggregates. Also, simplify use of IsBinaryCoercible() by
allowing it to succeed whenever the target type is ANY or ANYELEMENT.
It was inconsistent that it dealt with other polymorphic target types
but not these.
Atri Sharma and Andrew Gierth; reviewed by Pavel Stehule and Vik Fearing,
and rather heavily editorialized upon by Tom Lane
2013-12-23 22:11:35 +01:00
errhint ( " To use the hypothetical-set aggregate %s, the number of hypothetical direct arguments (here %d) must match the number of ordering columns (here %d). " ,
NameListToString ( funcname ) ,
Phase 3 of pgindent updates.
Don't move parenthesized lines to the left, even if that means they
flow past the right margin.
By default, BSD indent lines up statement continuation lines that are
within parentheses so that they start just to the right of the preceding
left parenthesis. However, traditionally, if that resulted in the
continuation line extending to the right of the desired right margin,
then indent would push it left just far enough to not overrun the margin,
if it could do so without making the continuation line start to the left of
the current statement indent. That makes for a weird mix of indentations
unless one has been completely rigid about never violating the 80-column
limit.
This behavior has been pretty universally panned by Postgres developers.
Hence, disable it with indent's new -lpl switch, so that parenthesized
lines are always lined up with the preceding left paren.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 21:35:54 +02:00
nvargs - numAggregatedArgs , numAggregatedArgs ) ,
                                     parser_errposition(pstate, location)));
                }
                else
                {
                    if (nvargs <= numAggregatedArgs)
                        ereport(ERROR,
                                (errcode(ERRCODE_UNDEFINED_FUNCTION),
                                 errmsg("function %s does not exist",
                                        func_signature_string(funcname, nargs,
                                                              argnames,
                                                              actual_arg_types)),
                                 errhint("There is an ordered-set aggregate %s, but it requires at least %d direct arguments.",
                                         NameListToString(funcname),
                                         catDirectArgs),
                                 parser_errposition(pstate, location)));
                }
            }
        }

        /* Check type matching of hypothetical arguments */
        if (aggkind == AGGKIND_HYPOTHETICAL)
            unify_hypothetical_args(pstate, fargs, numAggregatedArgs,
                                    actual_arg_types, declared_arg_types);
    }
    else
    {
        /* Normal aggregate, so it can't have WITHIN GROUP */
        if (agg_within_group)
            ereport(ERROR,
                    (errcode(ERRCODE_WRONG_OBJECT_TYPE),
                     errmsg("%s is not an ordered-set aggregate, so it cannot have WITHIN GROUP",
                            NameListToString(funcname)),
                     parser_errposition(pstate, location)));
    }
}
else if (fdresult == FUNCDETAIL_WINDOWFUNC)
{
    /*
     * True window functions must be called with a window definition.
     */
    if (!over)
        ereport(ERROR,
                (errcode(ERRCODE_WRONG_OBJECT_TYPE),
                 errmsg("window function %s requires an OVER clause",
                        NameListToString(funcname)),
                 parser_errposition(pstate, location)));
    /* And, per spec, WITHIN GROUP isn't allowed */
    if (agg_within_group)
        ereport(ERROR,
                (errcode(ERRCODE_WRONG_OBJECT_TYPE),
                 errmsg("window function %s cannot have WITHIN GROUP",
                        NameListToString(funcname)),
                 parser_errposition(pstate, location)));
}
else
{
    /*
     * Oops.  Time to die.
     *
     * If we are dealing with the attribute notation rel.function, let the
     * caller handle failure.
     */
    if (is_column)
        return NULL;

    /*
     * Else generate a detailed complaint for a function
     */
    if (fdresult == FUNCDETAIL_MULTIPLE)
        ereport(ERROR,
                (errcode(ERRCODE_AMBIGUOUS_FUNCTION),
                 errmsg("function %s is not unique",
                        func_signature_string(funcname, nargs, argnames,
                                              actual_arg_types)),
                 errhint("Could not choose a best candidate function. "
                         "You might need to add explicit type casts."),
                 parser_errposition(pstate, location)));
    else if (list_length(agg_order) > 1 && !agg_within_group)
    {
        /* It's agg(x, ORDER BY y,z) ... perhaps misplaced ORDER BY */
        ereport(ERROR,
                (errcode(ERRCODE_UNDEFINED_FUNCTION),
                 errmsg("function %s does not exist",
                        func_signature_string(funcname, nargs, argnames,
                                              actual_arg_types)),
                 errhint("No aggregate function matches the given name and argument types. "
                         "Perhaps you misplaced ORDER BY; ORDER BY must appear "
                         "after all regular arguments of the aggregate."),
                 parser_errposition(pstate, location)));
    }
    else
        ereport(ERROR,
                (errcode(ERRCODE_UNDEFINED_FUNCTION),
                 errmsg("function %s does not exist",
                        func_signature_string(funcname, nargs, argnames,
                                              actual_arg_types)),
                 errhint("No function matches the given name and argument types. "
                         "You might need to add explicit type casts."),
                 parser_errposition(pstate, location)));
}

/*
 * If there are default arguments, we have to include their types in
 * actual_arg_types for the purpose of checking generic type consistency.
 * However, we do NOT put them into the generated parse node, because
 * their actual values might change before the query gets run.  The
 * planner has to insert the up-to-date values at plan time.
 */
nargsplusdefs = nargs;
foreach(l, argdefaults)
{
    Node       *expr = (Node *) lfirst(l);

    /* probably shouldn't happen ... */
    if (nargsplusdefs >= FUNC_MAX_ARGS)
        ereport(ERROR,
                (errcode(ERRCODE_TOO_MANY_ARGUMENTS),
                 errmsg_plural("cannot pass more than %d argument to a function",
                               "cannot pass more than %d arguments to a function",
                               FUNC_MAX_ARGS,
                               FUNC_MAX_ARGS),
                 parser_errposition(pstate, location)));

    actual_arg_types[nargsplusdefs++] = exprType(expr);
}

/*
 * enforce consistency with polymorphic argument and return types,
 * possibly adjusting return type or declared_arg_types (which will be
 * used as the cast destination by make_fn_arguments)
 */
rettype = enforce_generic_type_consistency(actual_arg_types,
                                           declared_arg_types,
                                           nargsplusdefs,
                                           rettype,
                                           false);

/* perform the necessary typecasting of arguments */
make_fn_arguments(pstate, fargs, actual_arg_types, declared_arg_types);
/*
 * If the function isn't actually variadic, forget any VARIADIC decoration
 * on the call.  (Perhaps we should throw an error instead, but
 * historically we've allowed people to write that.)
 */
if (!OidIsValid(vatype))
{
    Assert(nvargs == 0);
    func_variadic = false;
}

/*
 * If it's a variadic function call, transform the last nvargs arguments
 * into an array --- unless it's an "any" variadic.
 */
if (nvargs > 0 && vatype != ANYOID)
{
    ArrayExpr  *newa = makeNode(ArrayExpr);
    int         non_var_args = nargs - nvargs;
    List       *vargs;

    Assert(non_var_args >= 0);
    vargs = list_copy_tail(fargs, non_var_args);
    fargs = list_truncate(fargs, non_var_args);

    newa->elements = vargs;
    /* assume all the variadic arguments were coerced to the same type */
    newa->element_typeid = exprType((Node *) linitial(vargs));
    newa->array_typeid = get_array_type(newa->element_typeid);
    if (!OidIsValid(newa->array_typeid))
        ereport(ERROR,
                (errcode(ERRCODE_UNDEFINED_OBJECT),
                 errmsg("could not find array type for data type %s",
                        format_type_be(newa->element_typeid)),
                 parser_errposition(pstate, exprLocation((Node *) vargs))));
    /* array_collid will be set by parse_collate.c */
    newa->multidims = false;
    newa->location = exprLocation((Node *) vargs);

    fargs = lappend(fargs, newa);
    /* We could not have had VARIADIC marking before ... */
    Assert(!func_variadic);
    /* ... but now, it's a VARIADIC call */
    func_variadic = true;
}

/*
 * If an "any" variadic is called with explicit VARIADIC marking, insist
 * that the variadic parameter be of some array type.
 */
if (nargs > 0 && vatype == ANYOID && func_variadic)
{
    Oid         va_arr_typid = actual_arg_types[nargs - 1];

    if (!OidIsValid(get_base_element_type(va_arr_typid)))
        ereport(ERROR,
                (errcode(ERRCODE_DATATYPE_MISMATCH),
                 errmsg("VARIADIC argument must be an array"),
                 parser_errposition(pstate,
                                    exprLocation((Node *) llast(fargs)))));
}

/* if it returns a set, check that's OK */
if (retset)
    check_srf_call_placement(pstate, last_srf, location);

/* build the appropriate output structure */
if (fdresult == FUNCDETAIL_NORMAL || fdresult == FUNCDETAIL_PROCEDURE)
{
    FuncExpr   *funcexpr = makeNode(FuncExpr);

    funcexpr->funcid = funcid;
    funcexpr->funcresulttype = rettype;
    funcexpr->funcretset = retset;
    funcexpr->funcvariadic = func_variadic;
    funcexpr->funcformat = COERCE_EXPLICIT_CALL;
    /* funccollid and inputcollid will be set by parse_collate.c */
    funcexpr->args = fargs;
    funcexpr->location = location;

    retval = (Node *) funcexpr;
}
else if (fdresult == FUNCDETAIL_AGGREGATE && !over)
{
    /* aggregate function */
    Aggref     *aggref = makeNode(Aggref);

    aggref->aggfnoid = funcid;
    aggref->aggtype = rettype;
    /* aggcollid and inputcollid will be set by parse_collate.c */
    aggref->aggtranstype = InvalidOid;  /* will be set by planner */
/* aggargtypes will be set by transformAggregateCall */
/* aggdirectargs and args will be set by transformAggregateCall */
/* aggorder and aggdistinct will be set by transformAggregateCall */
aggref->aggfilter = agg_filter;
aggref->aggstar = agg_star;
aggref->aggvariadic = func_variadic;
aggref->aggkind = aggkind;
/* agglevelsup will be set by transformAggregateCall */
aggref->aggsplit = AGGSPLIT_SIMPLE;	/* planner might change this */
aggref->location = location;
/*
 * Reject attempt to call a parameterless aggregate without (*)
 * syntax.  This is mere pedantry but some folks insisted ...
 */
if (fargs == NIL && !agg_star && !agg_within_group)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("%s(*) must be used to call a parameterless aggregate function",
NameListToString(funcname)),
parser_errposition(pstate, location)));

if (retset)
ereport(ERROR,
(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
errmsg("aggregates cannot return sets"),
parser_errposition(pstate, location)));
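/*
 * Illustrative SQL (not part of this file; "tab" is a hypothetical
 * table) for the parameterless-aggregate check above:
 *
 *     SELECT count(*) FROM tab;    -- accepted
 *     SELECT count() FROM tab;     -- rejected by the (*) pedantry check
 */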
/*
* We might want to support named arguments later, but disallow it for
* now.  We'd need to figure out the parsed representation (should the
* NamedArgExprs go above or below the TargetEntry nodes?) and then
* teach the planner to reorder the list properly.  Or maybe we could
* make transformAggregateCall do that?  However, if you'd also like
* to allow default arguments for aggregates, we'd need to do it in
* planning to avoid semantic problems.
*/
if (argnames != NIL)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("aggregates cannot use named arguments"),
parser_errposition(pstate, location)));
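/*
 * Illustrative SQL (not part of this file; the parameter name and
 * table are hypothetical): named-argument notation in an aggregate
 * call is what lands here, e.g.
 *
 *     SELECT sum(x => val) FROM tab;   -- rejected: named arguments
 */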
/* parse_agg.c does additional aggregate-specific processing */
transformAggregateCall(pstate, aggref, fargs, agg_order, agg_distinct);
retval = (Node *) aggref;
}
else
{
/* window function */
WindowFunc *wfunc = makeNode(WindowFunc);
Assert(over);				/* lack of this was checked above */
Assert(!agg_within_group);	/* also checked above */
wfunc->winfnoid = funcid;
wfunc->wintype = rettype;
/* wincollid and inputcollid will be set by parse_collate.c */
wfunc->args = fargs;
/* winref will be set by transformWindowFuncCall */
wfunc->winstar = agg_star;
wfunc->winagg = (fdresult == FUNCDETAIL_AGGREGATE);
wfunc->aggfilter = agg_filter;
wfunc->location = location;
/*
 * agg_star is allowed for aggregate functions but distinct isn't
 */
if (agg_distinct)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg ( " DISTINCT is not implemented for window functions " ) ,
2008-12-28 19:54:01 +01:00
parser_errposition ( pstate , location ) ) ) ;
/*
 * Reject attempt to call a parameterless aggregate without (*)
 * syntax.  This is mere pedantry but some folks insisted ...
 */
if (wfunc->winagg && fargs == NIL && !agg_star)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("%s(*) must be used to call a parameterless aggregate function",
NameListToString(funcname)),
parser_errposition(pstate, location)));
/*
* ordered aggs not allowed in windows yet
*/
if (agg_order != NIL)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("aggregate ORDER BY is not implemented for window functions"),
parser_errposition(pstate, location)));
/*
* FILTER is not yet supported with true window functions
*/
if (!wfunc->winagg && agg_filter)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg ( " FILTER is not implemented for non-aggregate window functions " ) ,
2009-12-15 18:57:48 +01:00
parser_errposition ( pstate , location ) ) ) ;
		/*
		 * Window functions can neither take nor return sets
		 */
		if (pstate->p_last_srf != last_srf)
			ereport(ERROR,
					(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
					 errmsg("window function calls cannot contain set-returning function calls"),
					 errhint("You might be able to move the set-returning function into a LATERAL FROM item."),
					 parser_errposition(pstate,
										exprLocation(pstate->p_last_srf))));
		if (retset)
			ereport(ERROR,
					(errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
					 errmsg("window functions cannot return sets"),
					 parser_errposition(pstate, location)));
		/* parse_agg.c does additional window-func-specific processing */
		transformWindowFuncCall(pstate, wfunc, over);

		retval = (Node *) wfunc;
}
	/* if it returns a set, remember it for error checks at higher levels */
	if (retset)
		pstate->p_last_srf = retval;
	return retval;
}
/* func_match_argtypes()
 *
 * Given a list of candidate functions (having the right name and number
 * of arguments) and an array of input datatype OIDs, produce a shortlist of
 * those candidates that actually accept the input datatypes (either exactly
 * or by coercion), and return the number of such candidates.
 *
 * Note that can_coerce_type will assume that UNKNOWN inputs are coercible to
 * anything, so candidates will not be eliminated on that basis.
 *
 * NB: okay to modify input list structure, as long as we find at least
 * one match.  If no match at all, the list must remain unmodified.
 */
int
func_match_argtypes(int nargs,
					Oid *input_typeids,
					FuncCandidateList raw_candidates,
					FuncCandidateList *candidates)	/* return value */
{
	FuncCandidateList current_candidate;
	FuncCandidateList next_candidate;
	int			ncandidates = 0;

	*candidates = NULL;

	for (current_candidate = raw_candidates;
		 current_candidate != NULL;
		 current_candidate = next_candidate)
	{
		next_candidate = current_candidate->next;
		if (can_coerce_type(nargs, input_typeids, current_candidate->args,
							COERCION_IMPLICIT))
		{
			current_candidate->next = *candidates;
			*candidates = current_candidate;
			ncandidates++;
		}
	}

	return ncandidates;
}								/* func_match_argtypes() */
/* func_select_candidate()
 *		Given the input argtype array and more than one candidate
 *		for the function, attempt to resolve the conflict.
 *
 * Returns the selected candidate if the conflict can be resolved,
 * otherwise returns NULL.
 *
 * Note that the caller has already determined that there is no candidate
 * exactly matching the input argtypes, and has pruned away any "candidates"
 * that aren't actually coercion-compatible with the input types.
 *
 * This is also used for resolving ambiguous operator references.  Formerly
 * parse_oper.c had its own, essentially duplicate code for the purpose.
 * The following comments (formerly in parse_oper.c) are kept to record some
 * of the history of these heuristics.
 *
 * OLD COMMENTS:
 *
 * This routine is new code, replacing binary_oper_select_candidate()
 * which dates from v4.2/v1.0.x days.  It tries very hard to match up
 * operators with types, including allowing type coercions if necessary.
 * The important thing is that the code do as much as possible,
 * while _never_ doing the wrong thing, where "the wrong thing" would
 * be returning an operator when other better choices are available,
 * or returning an operator which is a non-intuitive possibility.
 * - thomas 1998-05-21
 *
 * The comments below came from binary_oper_select_candidate(), and
 * illustrate the issues and choices which are possible:
 * - thomas 1998-05-20
 *
 * current wisdom holds that the default operator should be one in which
 * both operands have the same type (there will only be one such
 * operator)
 *
 * 7.27.93 - I have decided not to do this; it's too hard to justify, and
 * it's easy enough to typecast explicitly - avi
 * [the rest of this routine was commented out since then - ay]
 *
 * 6/23/95 - I don't completely agree with avi.  In particular, casting
 * floats is a pain for users.  Whatever the rationale behind not doing
 * this is, I need the following special case to work.
 *
 * In the WHERE clause of a query, if a float is specified without
 * quotes, we treat it as float8.  I added the float48* operators so
 * that we can operate on float4 and float8.  But now we have more than
 * one matching operator if the right arg is unknown (eg. float
 * specified with quotes).  This breaks some stuff in the regression
 * test where there are floats in quotes not properly cast.  Below is
 * the solution.  In addition to requiring that the operator operate on
 * the same type for both operands [as in the code Avi originally
 * commented out], we also require that the operators be equivalent in
 * some sense.  (see equivalentOpersAfterPromotion for details.)
 * - ay 6/95
 */
FuncCandidateList
func_select_candidate(int nargs,
					  Oid *input_typeids,
					  FuncCandidateList candidates)
{
	FuncCandidateList current_candidate,
				first_candidate,
				last_candidate;
	Oid		   *current_typeids;
	Oid			current_type;
	int			i;
	int			ncandidates;
	int			nbestMatch,
				nmatch,
				nunknowns;
	Oid			input_base_typeids[FUNC_MAX_ARGS];
	TYPCATEGORY slot_category[FUNC_MAX_ARGS],
				current_category;
	bool		current_is_preferred;
	bool		slot_has_preferred_type[FUNC_MAX_ARGS];
	bool		resolved_unknowns;
	/* protect local fixed-size arrays */
	if (nargs > FUNC_MAX_ARGS)
		ereport(ERROR,
				(errcode(ERRCODE_TOO_MANY_ARGUMENTS),
				 errmsg_plural("cannot pass more than %d argument to a function",
							   "cannot pass more than %d arguments to a function",
							   FUNC_MAX_ARGS,
							   FUNC_MAX_ARGS)));
	/*
	 * If any input types are domains, reduce them to their base types.  This
	 * ensures that we will consider functions on the base type to be "exact
	 * matches" in the exact-match heuristic; it also makes it possible to do
	 * something useful with the type-category heuristics.  Note that this
	 * makes it difficult, but not impossible, to use functions declared to
	 * take a domain as an input datatype.  Such a function will be selected
	 * over the base-type function only if it is an exact match at all
	 * argument positions, and so was already chosen by our caller.
	 *
	 * While we're at it, count the number of unknown-type arguments for use
	 * later.
	 */
	nunknowns = 0;
	for (i = 0; i < nargs; i++)
	{
		if (input_typeids[i] != UNKNOWNOID)
			input_base_typeids[i] = getBaseType(input_typeids[i]);
		else
		{
			/* no need to call getBaseType on UNKNOWNOID */
			input_base_typeids[i] = UNKNOWNOID;
			nunknowns++;
		}
	}
	/*
	 * Run through all candidates and keep those with the most matches on
	 * exact types.  Keep all candidates if none match.
	 */
	ncandidates = 0;
	nbestMatch = 0;
	last_candidate = NULL;
	for (current_candidate = candidates;
		 current_candidate != NULL;
		 current_candidate = current_candidate->next)
	{
		current_typeids = current_candidate->args;
		nmatch = 0;
		for (i = 0; i < nargs; i++)
		{
			if (input_base_typeids[i] != UNKNOWNOID &&
				current_typeids[i] == input_base_typeids[i])
				nmatch++;
		}

		/* take this one as the best choice so far? */
		if ((nmatch > nbestMatch) || (last_candidate == NULL))
		{
			nbestMatch = nmatch;
			candidates = current_candidate;
			last_candidate = current_candidate;
			ncandidates = 1;
		}
		/* no worse than the last choice, so keep this one too? */
		else if (nmatch == nbestMatch)
		{
			last_candidate->next = current_candidate;
			last_candidate = current_candidate;
			ncandidates++;
		}
		/* otherwise, don't bother keeping this one... */
	}

	if (last_candidate)			/* terminate rebuilt list */
		last_candidate->next = NULL;

	if (ncandidates == 1)
		return candidates;
	/*
	 * Still too many candidates?  Now look for candidates which have either
	 * exact matches or preferred types at the args that will require
	 * coercion.  (Restriction added in 7.4: preferred type must be of same
	 * category as input type; give no preference to cross-category
	 * conversions to preferred types.)  Keep all candidates if none match.
	 */
	for (i = 0; i < nargs; i++)	/* avoid multiple lookups */
		slot_category[i] = TypeCategory(input_base_typeids[i]);
	ncandidates = 0;
	nbestMatch = 0;
	last_candidate = NULL;
	for (current_candidate = candidates;
		 current_candidate != NULL;
		 current_candidate = current_candidate->next)
	{
		current_typeids = current_candidate->args;
		nmatch = 0;
		for (i = 0; i < nargs; i++)
		{
			if (input_base_typeids[i] != UNKNOWNOID)
			{
				if (current_typeids[i] == input_base_typeids[i] ||
					IsPreferredType(slot_category[i], current_typeids[i]))
					nmatch++;
			}
		}

		if ((nmatch > nbestMatch) || (last_candidate == NULL))
		{
			nbestMatch = nmatch;
			candidates = current_candidate;
			last_candidate = current_candidate;
			ncandidates = 1;
		}
		else if (nmatch == nbestMatch)
		{
			last_candidate->next = current_candidate;
			last_candidate = current_candidate;
			ncandidates++;
		}
	}

	if (last_candidate)			/* terminate rebuilt list */
		last_candidate->next = NULL;

	if (ncandidates == 1)
		return candidates;
	/*
	 * Still too many candidates?  Try assigning types for the unknown inputs.
	 *
	 * If there are no unknown inputs, we have no more heuristics that apply,
	 * and must fail.
	 */
	if (nunknowns == 0)
		return NULL;			/* failed to select a best candidate */

	/*
	 * The next step examines each unknown argument position to see if we can
	 * determine a "type category" for it.  If any candidate has an input
	 * datatype of STRING category, use STRING category (this bias towards
	 * STRING is appropriate since unknown-type literals look like strings).
	 * Otherwise, if all the candidates agree on the type category of this
	 * argument position, use that category.  Otherwise, fail because we
	 * cannot determine a category.
	 *
	 * If we are able to determine a type category, also notice whether any of
	 * the candidates takes a preferred datatype within the category.
	 *
	 * Having completed this examination, remove candidates that accept the
	 * wrong category at any unknown position.  Also, if at least one
	 * candidate accepted a preferred type at a position, remove candidates
	 * that accept non-preferred types.  If just one candidate remains,
	 * return that one.  However, if this rule turns out to reject all
	 * candidates, keep them all instead.
	 */
	resolved_unknowns = false;
	for (i = 0; i < nargs; i++)
	{
		bool		have_conflict;

		if (input_base_typeids[i] != UNKNOWNOID)
			continue;
		resolved_unknowns = true;	/* assume we can do it */
		slot_category[i] = TYPCATEGORY_INVALID;
		slot_has_preferred_type[i] = false;
		have_conflict = false;
		for (current_candidate = candidates;
			 current_candidate != NULL;
			 current_candidate = current_candidate->next)
		{
			current_typeids = current_candidate->args;
			current_type = current_typeids[i];
			get_type_category_preferred(current_type,
										&current_category,
										&current_is_preferred);
			if (slot_category[i] == TYPCATEGORY_INVALID)
			{
				/* first candidate */
				slot_category[i] = current_category;
				slot_has_preferred_type[i] = current_is_preferred;
			}
			else if (current_category == slot_category[i])
			{
				/* more candidates in same category */
				slot_has_preferred_type[i] |= current_is_preferred;
			}
			else
			{
				/* category conflict! */
				if (current_category == TYPCATEGORY_STRING)
				{
					/* STRING always wins if available */
					slot_category[i] = current_category;
					slot_has_preferred_type[i] = current_is_preferred;
				}
				else
				{
					/*
					 * Remember conflict, but keep going (might find STRING)
					 */
					have_conflict = true;
				}
			}
		}
		if (have_conflict && slot_category[i] != TYPCATEGORY_STRING)
		{
			/* Failed to resolve category conflict at this position */
			resolved_unknowns = false;
			break;
		}
	}

	if (resolved_unknowns)
	{
		/* Strip non-matching candidates */
		ncandidates = 0;
		first_candidate = candidates;
		last_candidate = NULL;
		for (current_candidate = candidates;
			 current_candidate != NULL;
			 current_candidate = current_candidate->next)
		{
			bool		keepit = true;

			current_typeids = current_candidate->args;
			for (i = 0; i < nargs; i++)
			{
				if (input_base_typeids[i] != UNKNOWNOID)
					continue;
				current_type = current_typeids[i];
				get_type_category_preferred(current_type,
											&current_category,
											&current_is_preferred);
				if (current_category != slot_category[i])
				{
					keepit = false;
					break;
				}
				if (slot_has_preferred_type[i] && !current_is_preferred)
				{
					keepit = false;
					break;
				}
			}
			if (keepit)
			{
				/* keep this candidate */
				last_candidate = current_candidate;
				ncandidates++;
			}
			else
			{
				/* forget this candidate */
				if (last_candidate)
					last_candidate->next = current_candidate->next;
				else
					first_candidate = current_candidate->next;
			}
		}

		/* if we found any matches, restrict our attention to those */
		if (last_candidate)
		{
			candidates = first_candidate;
			/* terminate rebuilt list */
			last_candidate->next = NULL;
		}
		if (ncandidates == 1)
			return candidates;
	}

	/*
	 * Last gasp: if there are both known- and unknown-type inputs, and all
	 * the known types are the same, assume the unknown inputs are also that
	 * type, and see if that gives us a unique match.  If so, use that match.
	 *
	 * NOTE: for a binary operator with one unknown and one non-unknown input,
	 * we already tried this heuristic in binary_oper_exact().  However, that
	 * code only finds exact matches, whereas here we will handle matches that
	 * involve coercion, polymorphic type resolution, etc.
	 */
	if (nunknowns < nargs)
	{
		Oid			known_type = UNKNOWNOID;

		for (i = 0; i < nargs; i++)
		{
			if (input_base_typeids[i] == UNKNOWNOID)
				continue;
			if (known_type == UNKNOWNOID)	/* first known arg? */
				known_type = input_base_typeids[i];
			else if (known_type != input_base_typeids[i])
			{
				/* oops, not all match */
				known_type = UNKNOWNOID;
				break;
			}
		}

		if (known_type != UNKNOWNOID)
		{
			/* okay, just one known type, apply the heuristic */
			for (i = 0; i < nargs; i++)
				input_base_typeids[i] = known_type;
			ncandidates = 0;
			last_candidate = NULL;
			for (current_candidate = candidates;
				 current_candidate != NULL;
				 current_candidate = current_candidate->next)
			{
				current_typeids = current_candidate->args;
				if (can_coerce_type(nargs, input_base_typeids, current_typeids,
									COERCION_IMPLICIT))
				{
					if (++ncandidates > 1)
						break;	/* not unique, give up */
					last_candidate = current_candidate;
				}
			}
			if (ncandidates == 1)
			{
				/* successfully identified a unique match */
				last_candidate->next = NULL;
				return last_candidate;
			}
		}
	}

	return NULL;				/* failed to select a best candidate */
}								/* func_select_candidate() */

/* func_get_detail()
 *
 * Find the named function in the system catalogs.
 *
 * Attempt to find the named function in the system catalogs with
 * arguments exactly as specified, so that the normal case (exact match)
 * is as quick as possible.
 *
 * If an exact match isn't found:
 *	1) check for possible interpretation as a type coercion request
 *	2) apply the ambiguous-function resolution rules
 *
 * Return values *funcid through *true_typeids receive info about the function.
 * If argdefaults isn't NULL, *argdefaults receives a list of any default
 * argument expressions that need to be added to the given arguments.
 *
 * When processing a named- or mixed-notation call (ie, fargnames isn't NIL),
 * the returned true_typeids and argdefaults are ordered according to the
 * call's argument ordering: first any positional arguments, then the named
 * arguments, then defaulted arguments (if needed and allowed by
 * expand_defaults).  Some care is needed if this information is to be compared
 * to the function's pg_proc entry, but in practice the caller can usually
 * just work with the call's argument ordering.
 *
 * We rely primarily on fargnames/nargs/argtypes as the argument description.
 * The actual expression node list is passed in fargs so that we can check
 * for type coercion of a constant.  Some callers pass fargs == NIL indicating
 * they don't need that check made.  Note also that when fargnames isn't NIL,
 * the fargs list must be passed if the caller wants actual argument position
 * information to be returned into the NamedArgExpr nodes.
 */
FuncDetailCode
func_get_detail(List *funcname,
				List *fargs,
				List *fargnames,
				int nargs,
				Oid *argtypes,
				bool expand_variadic,
				bool expand_defaults,
				Oid *funcid,	/* return value */
				Oid *rettype,	/* return value */
				bool *retset,	/* return value */
				int *nvargs,	/* return value */
				Oid *vatype,	/* return value */
				Oid **true_typeids, /* return value */
				List **argdefaults) /* optional return value */
{
	FuncCandidateList raw_candidates;
	FuncCandidateList best_candidate;

	/* Passing NULL for argtypes is no longer allowed */
	Assert(argtypes);

	/* initialize output arguments to silence compiler warnings */
	*funcid = InvalidOid;
	*rettype = InvalidOid;
	*retset = false;
	*nvargs = 0;
	*vatype = InvalidOid;
	*true_typeids = NULL;
	if (argdefaults)
		*argdefaults = NIL;

	/* Get list of possible candidates from namespace search */
	raw_candidates = FuncnameGetCandidates(funcname, nargs, fargnames,
										   expand_variadic, expand_defaults,
										   false);

	/*
	 * Quickly check if there is an exact match to the input datatypes (there
	 * can be only one)
	 */
	for (best_candidate = raw_candidates;
		 best_candidate != NULL;
		 best_candidate = best_candidate->next)
	{
		if (memcmp(argtypes, best_candidate->args, nargs * sizeof(Oid)) == 0)
			break;
	}

	if (best_candidate == NULL)
	{
		/*
		 * If we didn't find an exact match, next consider the possibility
		 * that this is really a type-coercion request: a single-argument
		 * function call where the function name is a type name.  If so, and
		 * if the coercion path is RELABELTYPE or COERCEVIAIO, then go ahead
		 * and treat the "function call" as a coercion.
		 *
		 * This interpretation needs to be given higher priority than
		 * interpretations involving a type coercion followed by a function
		 * call, otherwise we can produce surprising results.  For example, we
		 * want "text(varchar)" to be interpreted as a simple coercion, not as
		 * "text(name(varchar))" which the code below this point is entirely
		 * capable of selecting.
		 *
		 * We also treat a coercion of a previously-unknown-type literal
		 * constant to a specific type this way.
		 *
		 * The reason we reject COERCION_PATH_FUNC here is that we expect the
		 * cast implementation function to be named after the target type.
		 * Thus the function will be found by normal lookup if appropriate.
		 *
		 * The reason we reject COERCION_PATH_ARRAYCOERCE is mainly that you
		 * can't write "foo[] (something)" as a function call.  In theory
		 * someone might want to invoke it as "_foo (something)" but we have
		 * never supported that historically, so we can insist that people
		 * write it as a normal cast instead.
		 *
		 * We also reject the specific case of COERCEVIAIO for a composite
		 * source type and a string-category target type.  This is a case that
		 * find_coercion_pathway() allows by default, but experience has shown
		 * that it's too commonly invoked by mistake.  So, again, insist that
		 * people use cast syntax if they want to do that.
		 *
		 * NB: it's important that this code does not exceed what coerce_type
		 * can do, because the caller will try to apply coerce_type if we
		 * return FUNCDETAIL_COERCION.  If we return that result for something
		 * coerce_type can't handle, we'll cause infinite recursion between
		 * this module and coerce_type!
		 */
		if (nargs == 1 && fargs != NIL && fargnames == NIL)
		{
			Oid			targetType = FuncNameAsType(funcname);

			if (OidIsValid(targetType))
			{
				Oid			sourceType = argtypes[0];
				Node	   *arg1 = linitial(fargs);
				bool		iscoercion;

				if (sourceType == UNKNOWNOID && IsA(arg1, Const))
				{
					/* always treat typename('literal') as coercion */
					iscoercion = true;
				}
				else
				{
					CoercionPathType cpathtype;
					Oid			cfuncid;

					cpathtype = find_coercion_pathway(targetType, sourceType,
													  COERCION_EXPLICIT,
													  &cfuncid);
					switch (cpathtype)
					{
						case COERCION_PATH_RELABELTYPE:
							iscoercion = true;
							break;
						case COERCION_PATH_COERCEVIAIO:
							if ((sourceType == RECORDOID ||
								 ISCOMPLEX(sourceType)) &&
								TypeCategory(targetType) == TYPCATEGORY_STRING)
								iscoercion = false;
							else
								iscoercion = true;
							break;
						default:
							iscoercion = false;
							break;
					}
				}

				if (iscoercion)
				{
					/* Treat it as a type coercion */
					*funcid = InvalidOid;
					*rettype = targetType;
					*retset = false;
					*nvargs = 0;
					*vatype = InvalidOid;
					*true_typeids = argtypes;
					return FUNCDETAIL_COERCION;
				}
			}
		}

		/*
		 * didn't find an exact match, so now try to match up candidates...
		 */
		if (raw_candidates != NULL)
		{
			FuncCandidateList current_candidates;
			int			ncandidates;

			ncandidates = func_match_argtypes(nargs,
											  argtypes,
											  raw_candidates,
											  &current_candidates);

			/* one match only? then run with it... */
			if (ncandidates == 1)
				best_candidate = current_candidates;

			/*
			 * multiple candidates? then better decide or throw an error...
			 */
			else if (ncandidates > 1)
			{
				best_candidate = func_select_candidate(nargs,
													   argtypes,
													   current_candidates);

				/*
				 * If we were able to choose a best candidate, we're done.
				 * Otherwise, ambiguous function call.
				 */
				if (!best_candidate)
					return FUNCDETAIL_MULTIPLE;
			}
		}
	}

	if (best_candidate)
	{
		HeapTuple	ftup;
		Form_pg_proc pform;
		FuncDetailCode result;

		/*
		 * If processing named args or expanding variadics or defaults, the
		 * "best candidate" might represent multiple equivalently good
		 * functions; treat this case as ambiguous.
		 */
		if (!OidIsValid(best_candidate->oid))
			return FUNCDETAIL_MULTIPLE;

		/*
		 * We disallow VARIADIC with named arguments unless the last argument
		 * (the one with VARIADIC attached) actually matched the variadic
		 * parameter.  This is mere pedantry, really, but some folks insisted.
		 */
		if (fargnames != NIL && !expand_variadic && nargs > 0 &&
			best_candidate->argnumbers[nargs - 1] != nargs - 1)
			return FUNCDETAIL_NOTFOUND;

		*funcid = best_candidate->oid;
		*nvargs = best_candidate->nvargs;
		*true_typeids = best_candidate->args;

		/*
		 * If processing named args, return actual argument positions into
		 * NamedArgExpr nodes in the fargs list.  This is a bit ugly but not
		 * worth the extra notation needed to do it differently.
		 */
		if (best_candidate->argnumbers != NULL)
		{
			int			i = 0;
			ListCell   *lc;

			foreach(lc, fargs)
			{
				NamedArgExpr *na = (NamedArgExpr *) lfirst(lc);

				if (IsA(na, NamedArgExpr))
					na->argnumber = best_candidate->argnumbers[i];
				i++;
			}
		}

		ftup = SearchSysCache1(PROCOID,
							   ObjectIdGetDatum(best_candidate->oid));
		if (!HeapTupleIsValid(ftup))	/* should not happen */
			elog(ERROR, "cache lookup failed for function %u",
				 best_candidate->oid);
		pform = (Form_pg_proc) GETSTRUCT(ftup);
		*rettype = pform->prorettype;
		*retset = pform->proretset;
		*vatype = pform->provariadic;
		/* fetch default args if caller wants 'em */
		if (argdefaults && best_candidate->ndargs > 0)
		{
			Datum		proargdefaults;
			bool		isnull;
			char	   *str;
			List	   *defaults;

			/* shouldn't happen, FuncnameGetCandidates messed up */
			if (best_candidate->ndargs > pform->pronargdefaults)
				elog(ERROR, "not enough default arguments");

			proargdefaults = SysCacheGetAttr(PROCOID, ftup,
											 Anum_pg_proc_proargdefaults,
											 &isnull);
			Assert(!isnull);
			str = TextDatumGetCString(proargdefaults);
			defaults = castNode(List, stringToNode(str));
			pfree(str);

			/* Delete any unused defaults from the returned list */
			if (best_candidate->argnumbers != NULL)
			{
				/*
				 * This is a bit tricky in named notation, since the supplied
				 * arguments could replace any subset of the defaults.  We
				 * work by making a bitmapset of the argnumbers of defaulted
				 * arguments, then scanning the defaults list and selecting
				 * the needed items.  (This assumes that defaulted arguments
				 * should be supplied in their positional order.)
				 */
				Bitmapset  *defargnumbers;
				int		   *firstdefarg;
				List	   *newdefaults;
				ListCell   *lc;
				int			i;

				defargnumbers = NULL;
				firstdefarg = &best_candidate->argnumbers[best_candidate->nargs - best_candidate->ndargs];
				for (i = 0; i < best_candidate->ndargs; i++)
					defargnumbers = bms_add_member(defargnumbers,
												   firstdefarg[i]);
				newdefaults = NIL;
				i = pform->pronargs - pform->pronargdefaults;
				foreach(lc, defaults)
				{
					if (bms_is_member(i, defargnumbers))
						newdefaults = lappend(newdefaults, lfirst(lc));
					i++;
				}
				Assert(list_length(newdefaults) == best_candidate->ndargs);
				bms_free(defargnumbers);
				*argdefaults = newdefaults;
			}
			else
			{
				/*
				 * Defaults for positional notation are lots easier; just
				 * remove any unwanted ones from the front.
				 */
				int			ndelete;

				ndelete = list_length(defaults) - best_candidate->ndargs;
				while (ndelete-- > 0)
					defaults = list_delete_first(defaults);
				*argdefaults = defaults;
			}
		}

		if (pform->proisagg)
			result = FUNCDETAIL_AGGREGATE;
		else if (pform->proiswindow)
			result = FUNCDETAIL_WINDOWFUNC;
		else if (pform->prorettype == InvalidOid)
			result = FUNCDETAIL_PROCEDURE;
		else
			result = FUNCDETAIL_NORMAL;

		ReleaseSysCache(ftup);
		return result;
	}

	return FUNCDETAIL_NOTFOUND;
}

/*
 * unify_hypothetical_args()
 *
 * Ensure that each hypothetical direct argument of a hypothetical-set
 * aggregate has the same type as the corresponding aggregated argument.
 * Modify the expressions in the fargs list, if necessary, and update
 * actual_arg_types[].
 *
 * If the agg declared its args non-ANY (even ANYELEMENT), we need only a
 * sanity check that the declared types match; make_fn_arguments will coerce
 * the actual arguments to match the declared ones.  But if the declaration
 * is ANY, nothing will happen in make_fn_arguments, so we need to fix any
 * mismatch here.  We use the same type resolution logic as UNION etc.
 */
static void
unify_hypothetical_args(ParseState *pstate,
						List *fargs,
						int numAggregatedArgs,
						Oid *actual_arg_types,
						Oid *declared_arg_types)
{
	Node	   *args[FUNC_MAX_ARGS];
	int			numDirectArgs,
				numNonHypotheticalArgs;
	int			i;
	ListCell   *lc;

	numDirectArgs = list_length(fargs) - numAggregatedArgs;
	numNonHypotheticalArgs = numDirectArgs - numAggregatedArgs;
	/* safety check (should only trigger with a misdeclared agg) */
	if (numNonHypotheticalArgs < 0)
		elog(ERROR, "incorrect number of arguments to hypothetical-set aggregate");

	/* Deconstruct fargs into an array for ease of subscripting */
	i = 0;
	foreach(lc, fargs)
	{
		args[i++] = (Node *) lfirst(lc);
	}

	/* Check each hypothetical arg and corresponding aggregated arg */
	for (i = numNonHypotheticalArgs; i < numDirectArgs; i++)
	{
		int			aargpos = numDirectArgs + (i - numNonHypotheticalArgs);
		Oid			commontype;

		/* A mismatch means AggregateCreate didn't check properly ... */
		if (declared_arg_types[i] != declared_arg_types[aargpos])
			elog(ERROR, "hypothetical-set aggregate has inconsistent declared argument types");

		/* No need to unify if make_fn_arguments will coerce */
		if (declared_arg_types[i] != ANYOID)
			continue;

		/*
		 * Select common type, giving preference to the aggregated argument's
		 * type (we'd rather coerce the direct argument once than coerce all
		 * the aggregated values).
		 */
		commontype = select_common_type(pstate,
										list_make2(args[aargpos], args[i]),
										"WITHIN GROUP",
										NULL);

		/*
		 * Perform the coercions.  We don't need to worry about NamedArgExprs
		 * here because they aren't supported with aggregates.
		 */
		args[i] = coerce_type(pstate,
							  args[i],
							  actual_arg_types[i],
							  commontype, -1,
							  COERCION_IMPLICIT,
							  COERCE_IMPLICIT_CAST,
							  -1);
		actual_arg_types[i] = commontype;
		args[aargpos] = coerce_type(pstate,
									args[aargpos],
									actual_arg_types[aargpos],
									commontype, -1,
									COERCION_IMPLICIT,
									COERCE_IMPLICIT_CAST,
									-1);
		actual_arg_types[aargpos] = commontype;
	}

	/* Reconstruct fargs from array */
	i = 0;
	foreach(lc, fargs)
	{
		lfirst(lc) = args[i++];
	}
}

/*
 * make_fn_arguments()
 *
 * Given the actual argument expressions for a function, and the desired
 * input types for the function, add any necessary typecasting to the
 * expression tree.  Caller should already have verified that casting is
 * allowed.
 *
 * Caution: given argument list is modified in-place.
 *
 * As with coerce_type, pstate may be NULL if no special unknown-Param
 * processing is wanted.
 */
void
make_fn_arguments(ParseState *pstate,
				  List *fargs,
				  Oid *actual_arg_types,
				  Oid *declared_arg_types)
{
	ListCell   *current_fargs;
	int			i = 0;

	foreach(current_fargs, fargs)
	{
		/* types don't match? then force coercion using a function call... */
		if (actual_arg_types[i] != declared_arg_types[i])
		{
			Node	   *node = (Node *) lfirst(current_fargs);

			/*
			 * If arg is a NamedArgExpr, coerce its input expr instead --- we
			 * want the NamedArgExpr to stay at the top level of the list.
			 */
			if (IsA(node, NamedArgExpr))
			{
				NamedArgExpr *na = (NamedArgExpr *) node;

				node = coerce_type(pstate,
								   (Node *) na->arg,
								   actual_arg_types[i],
								   declared_arg_types[i], -1,
								   COERCION_IMPLICIT,
								   COERCE_IMPLICIT_CAST,
								   -1);
				na->arg = (Expr *) node;
			}
			else
			{
				node = coerce_type(pstate,
								   node,
								   actual_arg_types[i],
								   declared_arg_types[i], -1,
								   COERCION_IMPLICIT,
								   COERCE_IMPLICIT_CAST,
								   -1);
				lfirst(current_fargs) = node;
			}
		}
		i++;
	}
}
2007-11-11 20:22:49 +01:00
/*
 * FuncNameAsType -
 *	  convenience routine to see if a function name matches a type name
 *
 * Returns the OID of the matching type, or InvalidOid if none.  We ignore
 * shell types and complex types.
 */
static Oid
FuncNameAsType(List *funcname)
{
	Oid			result;
	Type		typtup;

	typtup = LookupTypeName(NULL, makeTypeNameFromNameList(funcname),
							NULL, false);
	if (typtup == NULL)
		return InvalidOid;

	if (((Form_pg_type) GETSTRUCT(typtup))->typisdefined &&
		!OidIsValid(typeTypeRelid(typtup)))
		result = typeTypeId(typtup);
	else
		result = InvalidOid;

	ReleaseSysCache(typtup);
	return result;
}

/*
 * ParseComplexProjection -
 *	  handles function calls with a single argument that is of complex type.
 *	  If the function call is actually a column projection, return a suitably
 *	  transformed expression tree.  If not, return NULL.
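 *
 * For example (illustrative): given "SELECT bar(foo) FROM t" where foo is a
 * composite-type column having a field named bar, this routine lets the
 * call be treated as the column projection (foo).bar.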
 */
static Node *
ParseComplexProjection(ParseState *pstate, const char *funcname,
					   Node *first_arg, int location)
{
	TupleDesc	tupdesc;
	int			i;

	/*
	 * Special case for whole-row Vars so that we can resolve (foo.*).bar even
	 * when foo is a reference to a subselect, join, or RECORD function.  A
	 * bonus is that we avoid generating an unnecessary FieldSelect; our
	 * result can omit the whole-row Var and just be a Var for the selected
	 * field.
	 *
	 * This case could be handled by expandRecordVariable, but it's more
	 * efficient to do it this way when possible.
	 */
	if (IsA(first_arg, Var) &&
		((Var *) first_arg)->varattno == InvalidAttrNumber)
	{
		RangeTblEntry *rte;

		rte = GetRTEByRangeTablePosn(pstate,
									 ((Var *) first_arg)->varno,
									 ((Var *) first_arg)->varlevelsup);
		/* Return a Var if funcname matches a column, else NULL */
		return scanRTEForColumn(pstate, rte, funcname, location, 0, NULL);
	}

	/*
	 * Else do it the hard way with get_expr_result_tupdesc().
	 *
	 * If it's a Var of type RECORD, we have to work even harder: we have to
	 * find what the Var refers to, and pass that to get_expr_result_tupdesc.
	 * That task is handled by expandRecordVariable().
	 */
	if (IsA(first_arg, Var) &&
		((Var *) first_arg)->vartype == RECORDOID)
		tupdesc = expandRecordVariable(pstate, (Var *) first_arg, 0);
	else
		tupdesc = get_expr_result_tupdesc(first_arg, true);
	if (!tupdesc)
		return NULL;			/* unresolvable RECORD type */

	for (i = 0; i < tupdesc->natts; i++)
	{
		Form_pg_attribute att = TupleDescAttr(tupdesc, i);

		if (strcmp(funcname, NameStr(att->attname)) == 0 &&
			!att->attisdropped)
		{
			/* Success, so generate a FieldSelect expression */
			FieldSelect *fselect = makeNode(FieldSelect);

			fselect->arg = (Expr *) first_arg;
			fselect->fieldnum = i + 1;
			fselect->resulttype = att->atttypid;
			fselect->resulttypmod = att->atttypmod;
			/* save attribute's collation for parse_collate.c */
			fselect->resultcollid = att->attcollation;
			return (Node *) fselect;
		}
	}

	return NULL;				/* funcname does not match any column */
}

/*
 * funcname_signature_string
 *	  Build a string representing a function name, including arg types.
 *	  The result is something like "foo(integer)".
 *
 * If argnames isn't NIL, it is a list of C strings representing the actual
 * arg names for the last N arguments.  This must be considered part of the
 * function signature too, when dealing with named-notation function calls.
 *
 * This is typically used in the construction of function-not-found error
 * messages.
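 *
 * For example (illustrative): with nargs = 2, argnames holding the single
 * name "b", and argtypes = {INT4OID, TEXTOID}, the result for a function
 * named "foo" would be "foo(integer, b => text)".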
 */
const char *
funcname_signature_string(const char *funcname, int nargs,
						  List *argnames, const Oid *argtypes)
{
	StringInfoData argbuf;
	int			numposargs;
	ListCell   *lc;
	int			i;

	initStringInfo(&argbuf);

	appendStringInfo(&argbuf, "%s(", funcname);

	numposargs = nargs - list_length(argnames);
	lc = list_head(argnames);

	for (i = 0; i < nargs; i++)
	{
		if (i)
			appendStringInfoString(&argbuf, ", ");
		if (i >= numposargs)
		{
			appendStringInfo(&argbuf, "%s => ", (char *) lfirst(lc));
			lc = lnext(lc);
		}
		appendStringInfoString(&argbuf, format_type_be(argtypes[i]));
	}

	appendStringInfoChar(&argbuf, ')');

	return argbuf.data;			/* return palloc'd string buffer */
}

/*
 * func_signature_string
 *	  As above, but function name is passed as a qualified name list.
 */
const char *
func_signature_string(List *funcname, int nargs,
					  List *argnames, const Oid *argtypes)
{
	return funcname_signature_string(NameListToString(funcname),
									 nargs, argnames, argtypes);
}

/*
 * LookupFuncName
 *
 * Given a possibly-qualified function name and optionally a set of argument
 * types, look up the function.  Pass nargs == -1 to indicate that no
 * argument types are specified.
 *
 * If the function name is not schema-qualified, it is sought in the current
 * namespace search path.
 *
 * If the function is not found, we return InvalidOid if noError is true,
 * else raise an error.
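 *
 * Example of intended use (illustrative; assumes abs(integer) is visible):
 *
 *		Oid		argtypes[1] = {INT4OID};
 *		Oid		fnoid = LookupFuncName(list_make1(makeString("abs")),
 *									   1, argtypes, true);
 *
 * fnoid is then the OID of abs(integer) found via the search path, or
 * InvalidOid if there is none (since noError is passed as true).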
 */
Oid
LookupFuncName(List *funcname, int nargs, const Oid *argtypes, bool noError)
{
	FuncCandidateList clist;

	/* Passing NULL for argtypes is no longer allowed */
	Assert(argtypes);

	clist = FuncnameGetCandidates(funcname, nargs, NIL, false, false, noError);

	/*
	 * If no arguments were specified, the name must yield a unique candidate.
	 */
	if (nargs == -1)
	{
		if (clist)
		{
			if (clist->next)
			{
				if (!noError)
					ereport(ERROR,
							(errcode(ERRCODE_AMBIGUOUS_FUNCTION),
							 errmsg("function name \"%s\" is not unique",
									NameListToString(funcname)),
							 errhint("Specify the argument list to select the function unambiguously.")));
			}
			else
				return clist->oid;
		}
		else
		{
			if (!noError)
				ereport(ERROR,
						(errcode(ERRCODE_UNDEFINED_FUNCTION),
						 errmsg("could not find a function named \"%s\"",
								NameListToString(funcname))));
		}
		/* Not found, or ambiguous with noError: give up here */
		return InvalidOid;
	}

	while (clist)
	{
		if (memcmp(argtypes, clist->args, nargs * sizeof(Oid)) == 0)
			return clist->oid;
		clist = clist->next;
	}

	if (!noError)
		ereport(ERROR,
				(errcode(ERRCODE_UNDEFINED_FUNCTION),
				 errmsg("function %s does not exist",
						func_signature_string(funcname, nargs,
											  NIL, argtypes))));

	return InvalidOid;
}

/*
 * LookupFuncWithArgs
 *
 * Like LookupFuncName, but the argument types are specified by an
 * ObjectWithArgs node.  Also, this function can check whether the result is
 * a function, procedure, or aggregate, based on the objtype argument.  Pass
 * OBJECT_ROUTINE to accept any of them.
 *
 * For historical reasons, we also accept aggregates when looking for a
 * function.
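 *
 * For example (illustrative): DROP FUNCTION foo(integer) reaches here with
 * objtype = OBJECT_FUNCTION and an ObjectWithArgs whose objname list names
 * "foo" and whose objargs list holds the TypeName for integer.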
 */
Oid
LookupFuncWithArgs(ObjectType objtype, ObjectWithArgs *func, bool noError)
{
	Oid			argoids[FUNC_MAX_ARGS];
	int			argcount;
	int			i;
	ListCell   *args_item;
	Oid			oid;

	Assert(objtype == OBJECT_AGGREGATE ||
		   objtype == OBJECT_FUNCTION ||
		   objtype == OBJECT_PROCEDURE ||
		   objtype == OBJECT_ROUTINE);

	argcount = list_length(func->objargs);
	if (argcount > FUNC_MAX_ARGS)
		ereport(ERROR,
				(errcode(ERRCODE_TOO_MANY_ARGUMENTS),
				 errmsg_plural("functions cannot have more than %d argument",
							   "functions cannot have more than %d arguments",
							   FUNC_MAX_ARGS,
							   FUNC_MAX_ARGS)));

	args_item = list_head(func->objargs);
	for (i = 0; i < argcount; i++)
	{
		TypeName   *t = (TypeName *) lfirst(args_item);

		argoids[i] = LookupTypeNameOid(NULL, t, noError);
		args_item = lnext(args_item);
	}

	/*
	 * When looking for a function or routine, we pass noError through to
	 * LookupFuncName and let it make any error messages.  Otherwise, we make
	 * our own errors for the aggregate and procedure cases.
	 */
	oid = LookupFuncName(func->objname,
						 func->args_unspecified ? -1 : argcount,
						 argoids,
						 (objtype == OBJECT_FUNCTION ||
						  objtype == OBJECT_ROUTINE) ? noError : true);

	if (objtype == OBJECT_FUNCTION)
	{
		/* Make sure it's a function, not a procedure */
		if (oid && get_func_rettype(oid) == InvalidOid)
		{
			if (noError)
				return InvalidOid;
			ereport(ERROR,
					(errcode(ERRCODE_WRONG_OBJECT_TYPE),
					 errmsg("%s is not a function",
							func_signature_string(func->objname, argcount,
												  NIL, argoids))));
		}
	}
	else if (objtype == OBJECT_PROCEDURE)
	{
		if (!OidIsValid(oid))
		{
			if (noError)
				return InvalidOid;
			else if (func->args_unspecified)
				ereport(ERROR,
						(errcode(ERRCODE_UNDEFINED_FUNCTION),
						 errmsg("could not find a procedure named \"%s\"",
								NameListToString(func->objname))));
			else
				ereport(ERROR,
						(errcode(ERRCODE_UNDEFINED_FUNCTION),
						 errmsg("procedure %s does not exist",
								func_signature_string(func->objname, argcount,
													  NIL, argoids))));
		}

		/* Make sure it's a procedure */
		if (get_func_rettype(oid) != InvalidOid)
		{
			if (noError)
				return InvalidOid;
			ereport(ERROR,
					(errcode(ERRCODE_WRONG_OBJECT_TYPE),
					 errmsg("%s is not a procedure",
							func_signature_string(func->objname, argcount,
												  NIL, argoids))));
		}
	}
	else if (objtype == OBJECT_AGGREGATE)
	{
		if (!OidIsValid(oid))
		{
			if (noError)
				return InvalidOid;
			else if (func->args_unspecified)
				ereport(ERROR,
						(errcode(ERRCODE_UNDEFINED_FUNCTION),
						 errmsg("could not find an aggregate named \"%s\"",
								NameListToString(func->objname))));
			else if (argcount == 0)
				ereport(ERROR,
						(errcode(ERRCODE_UNDEFINED_FUNCTION),
						 errmsg("aggregate %s(*) does not exist",
								NameListToString(func->objname))));
			else
				ereport(ERROR,
						(errcode(ERRCODE_UNDEFINED_FUNCTION),
						 errmsg("aggregate %s does not exist",
								func_signature_string(func->objname, argcount,
													  NIL, argoids))));
		}

		/* Make sure it's an aggregate */
		if (!get_func_isagg(oid))
		{
			if (noError)
				return InvalidOid;
			/* we do not use the (*) notation for functions... */
			ereport(ERROR,
					(errcode(ERRCODE_WRONG_OBJECT_TYPE),
					 errmsg("function %s is not an aggregate",
							func_signature_string(func->objname, argcount,
												  NIL, argoids))));
		}
	}

	return oid;
}

/*
 * check_srf_call_placement
 *		Verify that a set-returning function is called in a valid place,
 *		and throw a nice error if not.
 *
 * A side-effect is to set pstate->p_hasTargetSRFs true if appropriate.
 *
 * last_srf should be a copy of pstate->p_last_srf from just before we
 * started transforming the function's arguments.  This allows detection
 * of whether the SRF's arguments contain any SRFs.
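 *
 * For example (illustrative): "SELECT generate_series(1, 3)" passes this
 * check because SRFs are allowed in the targetlist, while the same call in
 * a WHERE clause draws an error below.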
 */
void
check_srf_call_placement(ParseState *pstate, Node *last_srf, int location)
{
	const char *err;
	bool		errkind;

	/*
	 * Check to see if the set-returning function is in an invalid place
	 * within the query.  Basically, we don't allow SRFs anywhere except in
	 * the targetlist (which includes GROUP BY/ORDER BY expressions), VALUES,
	 * and functions in FROM.
	 *
	 * For brevity we support two schemes for reporting an error here: set
	 * "err" to a custom message, or set "errkind" true if the error context
	 * is sufficiently identified by what ParseExprKindName will return, *and*
	 * what it will return is just a SQL keyword.  (Otherwise, use a custom
	 * message to avoid creating translation problems.)
	 */
	err = NULL;
	errkind = false;
	switch (pstate->p_expr_kind)
	{
		case EXPR_KIND_NONE:
			Assert(false);		/* can't happen */
			break;
		case EXPR_KIND_OTHER:
			/* Accept SRF here; caller must throw error if wanted */
			break;
		case EXPR_KIND_JOIN_ON:
		case EXPR_KIND_JOIN_USING:
			err = _("set-returning functions are not allowed in JOIN conditions");
			break;
		case EXPR_KIND_FROM_SUBSELECT:
			/* can't get here, but just in case, throw an error */
			errkind = true;
			break;
		case EXPR_KIND_FROM_FUNCTION:
			/* okay, but we don't allow nested SRFs here */
			/* errmsg is chosen to match transformRangeFunction() */
			/* errposition should point to the inner SRF */
			if (pstate->p_last_srf != last_srf)
				ereport(ERROR,
						(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
						 errmsg("set-returning functions must appear at top level of FROM"),
						 parser_errposition(pstate,
											exprLocation(pstate->p_last_srf))));
			break;
		case EXPR_KIND_WHERE:
			errkind = true;
			break;
		case EXPR_KIND_POLICY:
			err = _("set-returning functions are not allowed in policy expressions");
			break;
		case EXPR_KIND_HAVING:
			errkind = true;
			break;
		case EXPR_KIND_FILTER:
			errkind = true;
			break;
		case EXPR_KIND_WINDOW_PARTITION:
		case EXPR_KIND_WINDOW_ORDER:
			/* okay, these are effectively GROUP BY/ORDER BY */
			pstate->p_hasTargetSRFs = true;
			break;
		case EXPR_KIND_WINDOW_FRAME_RANGE:
		case EXPR_KIND_WINDOW_FRAME_ROWS:
		case EXPR_KIND_WINDOW_FRAME_GROUPS:
			err = _("set-returning functions are not allowed in window definitions");
			break;
		case EXPR_KIND_SELECT_TARGET:
		case EXPR_KIND_INSERT_TARGET:
			/* okay */
			pstate->p_hasTargetSRFs = true;
			break;
		case EXPR_KIND_UPDATE_SOURCE:
		case EXPR_KIND_UPDATE_TARGET:
			/* disallowed because it would be ambiguous what to do */
			errkind = true;
			break;
		case EXPR_KIND_GROUP_BY:
		case EXPR_KIND_ORDER_BY:
			/* okay */
			pstate->p_hasTargetSRFs = true;
			break;
		case EXPR_KIND_DISTINCT_ON:
			/* okay */
			pstate->p_hasTargetSRFs = true;
			break;
		case EXPR_KIND_LIMIT:
		case EXPR_KIND_OFFSET:
			errkind = true;
			break;
		case EXPR_KIND_RETURNING:
			errkind = true;
			break;
		case EXPR_KIND_VALUES:
			/* SRFs are presently not supported by nodeValuesscan.c */
			errkind = true;
			break;
		case EXPR_KIND_VALUES_SINGLE:
			/* okay, since we process this like a SELECT tlist */
			pstate->p_hasTargetSRFs = true;
			break;
		case EXPR_KIND_CHECK_CONSTRAINT:
		case EXPR_KIND_DOMAIN_CHECK:
			err = _("set-returning functions are not allowed in check constraints");
			break;
		case EXPR_KIND_COLUMN_DEFAULT:
		case EXPR_KIND_FUNCTION_DEFAULT:
			err = _("set-returning functions are not allowed in DEFAULT expressions");
			break;
		case EXPR_KIND_INDEX_EXPRESSION:
			err = _("set-returning functions are not allowed in index expressions");
			break;
		case EXPR_KIND_INDEX_PREDICATE:
			err = _("set-returning functions are not allowed in index predicates");
			break;
		case EXPR_KIND_ALTER_COL_TRANSFORM:
			err = _("set-returning functions are not allowed in transform expressions");
			break;
		case EXPR_KIND_EXECUTE_PARAMETER:
			err = _("set-returning functions are not allowed in EXECUTE parameters");
			break;
		case EXPR_KIND_TRIGGER_WHEN:
			err = _("set-returning functions are not allowed in trigger WHEN conditions");
			break;
Implement table partitioning.
Table partitioning is like table inheritance and reuses much of the
existing infrastructure, but there are some important differences.
The parent is called a partitioned table and is always empty; it may
not have indexes or non-inherited constraints, since those make no
sense for a relation with no data of its own. The children are called
partitions and contain all of the actual data. Each partition has an
implicit partitioning constraint. Multiple inheritance is not
allowed, and partitioning and inheritance can't be mixed. Partitions
can't have extra columns and may not allow nulls unless the parent
does. Tuples inserted into the parent are automatically routed to the
correct partition, so tuple-routing ON INSERT triggers are not needed.
Tuple routing isn't yet supported for partitions which are foreign
tables, and it doesn't handle updates that cross partition boundaries.
Currently, tables can be range-partitioned or list-partitioned. List
partitioning is limited to a single column, but range partitioning can
involve multiple columns. A partitioning "column" can be an
expression.
Because table partitioning is less general than table inheritance, it
is hoped that it will be easier to reason about properties of
partitions, and therefore that this will serve as a better foundation
for a variety of possible optimizations, including query planner
optimizations. The tuple routing based which this patch does based on
the implicit partitioning constraints is an example of this, but it
seems likely that many other useful optimizations are also possible.
Amit Langote, reviewed and tested by Robert Haas, Ashutosh Bapat,
Amit Kapila, Rajkumar Raghuwanshi, Corey Huinker, Jaime Casanova,
Rushabh Lathia, Erik Rijkers, among others. Minor revisions by me.
2016-12-07 19:17:43 +01:00
case EXPR_KIND_PARTITION_EXPRESSION :
Disallow set-returning functions inside CASE or COALESCE.
When we reimplemented SRFs in commit 69f4b9c85, our initial choice was
to allow the behavior to vary from historical practice in cases where a
SRF call appeared within a conditional-execution construct (currently,
only CASE or COALESCE). But that was controversial to begin with, and
subsequent discussion has resulted in a consensus that it's better to
throw an error instead of executing the query differently from before,
so long as we can provide a reasonably clear error message and a way to
rewrite the query.
Hence, add a parser mechanism to allow detection of such cases during
parse analysis. The mechanism just requires storing, in the ParseState,
a pointer to the set-returning FuncExpr or OpExpr most recently emitted
by parse analysis. Then the parsing functions for CASE and COALESCE can
detect the presence of a SRF in their arguments by noting whether this
pointer changes while analyzing their arguments. Furthermore, if it does,
it provides a suitable error cursor location for the complaint. (This
means that if there's more than one SRF in the arguments, the error will
point at the last one to be analyzed not the first. While connoisseurs of
parsing behavior might find that odd, it's unlikely the average user would
ever notice.)
While at it, we can also provide more specific error messages than before
about some pre-existing restrictions, such as no-SRFs-within-aggregates.
Also, reject at parse time cases where a NULLIF or IS DISTINCT FROM
construct would need to return a set. We've never supported that, but the
restriction is depended on in more subtle ways now, so it seems wise to
detect it at the start.
Also, provide some documentation about how to rewrite a SRF-within-CASE
query using a custom wrapper SRF.
It turns out that the information_schema.user_mapping_options view
contained an instance of exactly the behavior we're now forbidding; but
rewriting it makes it more clear and safer too.
initdb forced because of user_mapping_options change.
Patch by me, with error message suggestions from Alvaro Herrera and
Andres Freund, pursuant to a complaint from Regina Obe.
Discussion: https://postgr.es/m/000001d2d5de$d8d66170$8a832450$@pcorp.us
2017-06-14 05:46:39 +02:00
err = _("set-returning functions are not allowed in partition key expressions");
Implement table partitioning.
Table partitioning is like table inheritance and reuses much of the
existing infrastructure, but there are some important differences.
The parent is called a partitioned table and is always empty; it may
not have indexes or non-inherited constraints, since those make no
sense for a relation with no data of its own. The children are called
partitions and contain all of the actual data. Each partition has an
implicit partitioning constraint. Multiple inheritance is not
allowed, and partitioning and inheritance can't be mixed. Partitions
can't have extra columns and may not allow nulls unless the parent
does. Tuples inserted into the parent are automatically routed to the
correct partition, so tuple-routing ON INSERT triggers are not needed.
Tuple routing isn't yet supported for partitions which are foreign
tables, and it doesn't handle updates that cross partition boundaries.
Currently, tables can be range-partitioned or list-partitioned. List
partitioning is limited to a single column, but range partitioning can
involve multiple columns. A partitioning "column" can be an
expression.
Because table partitioning is less general than table inheritance, it
is hoped that it will be easier to reason about properties of
partitions, and therefore that this will serve as a better foundation
for a variety of possible optimizations, including query planner
optimizations. The tuple routing which this patch does based on
the implicit partitioning constraints is an example of this, but it
seems likely that many other useful optimizations are also possible.
Amit Langote, reviewed and tested by Robert Haas, Ashutosh Bapat,
Amit Kapila, Rajkumar Raghuwanshi, Corey Huinker, Jaime Casanova,
Rushabh Lathia, Erik Rijkers, among others. Minor revisions by me.
2016-12-07 19:17:43 +01:00
break;
2018-02-10 19:05:14 +01:00
case EXPR_KIND_CALL_ARGUMENT:
2017-11-30 14:46:13 +01:00
err = _("set-returning functions are not allowed in CALL arguments");
break;
2016-09-13 19:54:24 +02:00
/*
 * There is intentionally no default: case here, so that the
 * compiler will warn if we add a new ParseExprKind without
 * extending this switch.  If we do see an unrecognized value at
 * runtime, the behavior will be the same as for EXPR_KIND_OTHER,
 * which is sane anyway.
 */
	}
	if (err)
		ereport(ERROR,
				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
				 errmsg_internal("%s", err),
				 parser_errposition(pstate, location)));
	if (errkind)
		ereport(ERROR,
				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
		/* translator: %s is name of a SQL construct, eg GROUP BY */
				 errmsg("set-returning functions are not allowed in %s",
						ParseExprKindName(pstate->p_expr_kind)),
				 parser_errposition(pstate, location)));
}