/*-------------------------------------------------------------------------
 *
 * analyze.c
 *    transform the raw parse tree into a query tree
 *
 * For optimizable statements, we are careful to obtain a suitable lock on
 * each referenced table, and other modules of the backend preserve or
 * re-obtain these locks before depending on the results. It is therefore
 * okay to do significant semantic analysis of these statements. For
 * utility commands, no locks are obtained here (and if they were, we could
 * not be sure we'd still have them at execution). Hence the general rule
 * for utility commands is to just dump them into a Query node untransformed.
 * DECLARE CURSOR, EXPLAIN, and CREATE TABLE AS are exceptions because they
 * contain optimizable statements, which we should transform.
 *
 *
 * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 * src/backend/parser/analyze.c
 *
 *-------------------------------------------------------------------------
 */

#include "postgres.h"
#include "access/sysattr.h"
#include "catalog/pg_type.h"
#include "miscadmin.h"
#include "nodes/makefuncs.h"
#include "nodes/nodeFuncs.h"
#include "optimizer/optimizer.h"
#include "parser/analyze.h"
#include "parser/parse_agg.h"
#include "parser/parse_clause.h"
#include "parser/parse_coerce.h"
#include "parser/parse_collate.h"
#include "parser/parse_cte.h"
#include "parser/parse_expr.h"
#include "parser/parse_func.h"
#include "parser/parse_oper.h"
#include "parser/parse_param.h"
#include "parser/parse_relation.h"
#include "parser/parse_target.h"
#include "parser/parsetree.h"
#include "rewrite/rewriteManip.h"
#include "utils/rel.h"
/* Hook for plugins to get control at end of parse analysis */
post_parse_analyze_hook_type post_parse_analyze_hook = NULL;
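
/*
 * A minimal sketch of how an extension might install this hook from its
 * _PG_init(), chaining to any hook installed earlier. The names
 * prev_post_parse_analyze_hook and my_post_parse_analyze are hypothetical:
 *
 *    static post_parse_analyze_hook_type prev_post_parse_analyze_hook = NULL;
 *
 *    static void
 *    my_post_parse_analyze(ParseState *pstate, Query *query)
 *    {
 *        if (prev_post_parse_analyze_hook)
 *            prev_post_parse_analyze_hook(pstate, query);
 *    }
 *
 *    void
 *    _PG_init(void)
 *    {
 *        prev_post_parse_analyze_hook = post_parse_analyze_hook;
 *        post_parse_analyze_hook = my_post_parse_analyze;
 *    }
 */
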
static Query *transformOptionalSelectInto(ParseState *pstate, Node *parseTree);
static Query *transformDeleteStmt(ParseState *pstate, DeleteStmt *stmt);
static Query *transformInsertStmt(ParseState *pstate, InsertStmt *stmt);
static List *transformInsertRow(ParseState *pstate, List *exprlist,
                                List *stmtcols, List *icolumns, List *attrnos,
                                bool strip_indirection);
static OnConflictExpr *transformOnConflictClause(ParseState *pstate,
                                                 OnConflictClause *onConflictClause);
static int count_rowexpr_columns(ParseState *pstate, Node *expr);
static Query *transformSelectStmt(ParseState *pstate, SelectStmt *stmt);
static Query *transformValuesClause(ParseState *pstate, SelectStmt *stmt);
static Query *transformSetOperationStmt(ParseState *pstate, SelectStmt *stmt);
static Node *transformSetOperationTree(ParseState *pstate, SelectStmt *stmt,
                                       bool isTopLevel, List **targetlist);
static void determineRecursiveColTypes(ParseState *pstate,
                                       Node *larg, List *nrtargetlist);
static Query *transformUpdateStmt(ParseState *pstate, UpdateStmt *stmt);
static List *transformReturningList(ParseState *pstate, List *returningList);
static List *transformUpdateTargetList(ParseState *pstate,
                                       List *targetList);
static Query *transformDeclareCursorStmt(ParseState *pstate,
                                         DeclareCursorStmt *stmt);
static Query *transformExplainStmt(ParseState *pstate,
                                   ExplainStmt *stmt);
static Query *transformCreateTableAsStmt(ParseState *pstate,
                                         CreateTableAsStmt *stmt);
static Query *transformCallStmt(ParseState *pstate,
                                CallStmt *stmt);
static void transformLockingClause(ParseState *pstate, Query *qry,
                                   LockingClause *lc, bool pushedDown);

#ifdef RAW_EXPRESSION_COVERAGE_TEST
static bool test_raw_expression_coverage(Node *node, void *context);
#endif

/*
 * parse_analyze
 *    Analyze a raw parse tree and transform it to Query form.
 *
 * Optionally, information about $n parameter types can be supplied.
 * References to $n indexes not defined by paramTypes[] are disallowed.
 *
 * The result is a Query node. Optimizable statements require considerable
 * transformation, while utility-type statements are simply hung off
 * a dummy CMD_UTILITY Query node.
 */
Query *
parse_analyze(RawStmt *parseTree, const char *sourceText,
              Oid *paramTypes, int numParams,
              QueryEnvironment *queryEnv)
{
    ParseState *pstate = make_parsestate(NULL);
    Query      *query;

    Assert(sourceText != NULL); /* required as of 8.4 */

    pstate->p_sourcetext = sourceText;

    if (numParams > 0)
        parse_fixed_parameters(pstate, paramTypes, numParams);

    pstate->p_queryEnv = queryEnv;

    query = transformTopLevelStmt(pstate, parseTree);

    if (post_parse_analyze_hook)
        (*post_parse_analyze_hook) (pstate, query);

    free_parsestate(pstate);

    return query;
}
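
/*
 * For orientation, a rough sketch of a typical call sequence (the real
 * entry points live in tcop/postgres.c; query_string here is illustrative):
 * raw-parse the source text, then analyze each RawStmt in turn.
 *
 *    List       *raw_list = pg_parse_query(query_string);
 *    ListCell   *lc;
 *
 *    foreach(lc, raw_list)
 *    {
 *        RawStmt    *rs = lfirst_node(RawStmt, lc);
 *        Query      *q = parse_analyze(rs, query_string, NULL, 0, NULL);
 *
 *        ... rewrite and plan q ...
 *    }
 */
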
/*
 * parse_analyze_varparams
 *
 * This variant is used when it's okay to deduce information about $n
 * symbol datatypes from context. The passed-in paramTypes[] array can
 * be modified or enlarged (via repalloc).
 */
Query *
parse_analyze_varparams(RawStmt *parseTree, const char *sourceText,
                        Oid **paramTypes, int *numParams)
{
    ParseState *pstate = make_parsestate(NULL);
    Query      *query;

    Assert(sourceText != NULL); /* required as of 8.4 */

    pstate->p_sourcetext = sourceText;

    parse_variable_parameters(pstate, paramTypes, numParams);

    query = transformTopLevelStmt(pstate, parseTree);

    /* make sure all is well with parameter types */
    check_variable_parameters(pstate, query);

    if (post_parse_analyze_hook)
        (*post_parse_analyze_hook) (pstate, query);

    free_parsestate(pstate);

    return query;
}
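
/*
 * For example (a sketch only): analyzing "SELECT $1 + 1" through this path
 * with an initially unspecified parameter type lets expression analysis
 * deduce an integer type for $1, and the deduced OID is written back into
 * the caller's paramTypes[] array, enlarging it if needed.
 */
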
/*
 * parse_sub_analyze
 *    Entry point for recursively analyzing a sub-statement.
 */
Query *
parse_sub_analyze(Node *parseTree, ParseState *parentParseState,
                  CommonTableExpr *parentCTE,
                  bool locked_from_parent,
                  bool resolve_unknowns)
{
    ParseState *pstate = make_parsestate(parentParseState);
    Query      *query;

    pstate->p_parent_cte = parentCTE;
    pstate->p_locked_from_parent = locked_from_parent;
    pstate->p_resolve_unknowns = resolve_unknowns;

    query = transformStmt(pstate, parseTree);

    free_parsestate(pstate);

    return query;
}

/*
 * transformTopLevelStmt -
 *    transform a Parse tree into a Query tree.
 *
 * This function is just responsible for transferring statement location data
 * from the RawStmt into the finished Query.
 */
Query *
transformTopLevelStmt(ParseState *pstate, RawStmt *parseTree)
{
    Query      *result;

    /* We're at top level, so allow SELECT INTO */
    result = transformOptionalSelectInto(pstate, parseTree->stmt);

    result->stmt_location = parseTree->stmt_location;
    result->stmt_len = parseTree->stmt_len;

    return result;
}
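
/*
 * A rough sketch of how a caller can use the copied fields (variable names
 * are illustrative): stmt_location/stmt_len identify the statement's
 * substring within the original source text, with stmt_len == 0 meaning
 * "rest of string", so when stmt_len is nonzero the text can be recovered
 * with something like
 *
 *    char       *stmt_text = pnstrdup(sourceText + query->stmt_location,
 *                                     query->stmt_len);
 */
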
/*
 * transformOptionalSelectInto -
 *    If SELECT has INTO, convert it to CREATE TABLE AS.
 *
 * The only thing we do here that we don't do in transformStmt() is to
 * convert SELECT ... INTO into CREATE TABLE AS. Since utility statements
 * aren't allowed within larger statements, this is only allowed at the top
 * of the parse tree, and so we only try it before entering the recursive
 * transformStmt() processing.
 */
static Query *
transformOptionalSelectInto(ParseState *pstate, Node *parseTree)
{
    if (IsA(parseTree, SelectStmt))
    {
        SelectStmt *stmt = (SelectStmt *) parseTree;

        /* If it's a set-operation tree, drill down to leftmost SelectStmt */
        while (stmt && stmt->op != SETOP_NONE)
            stmt = stmt->larg;
        Assert(stmt && IsA(stmt, SelectStmt) && stmt->larg == NULL);

        if (stmt->intoClause)
        {
            CreateTableAsStmt *ctas = makeNode(CreateTableAsStmt);

            ctas->query = parseTree;
            ctas->into = stmt->intoClause;
            ctas->relkind = OBJECT_TABLE;
            ctas->is_select_into = true;

            /*
             * Remove the intoClause from the SelectStmt. This makes it safe
             * for transformSelectStmt to complain if it finds intoClause set
             * (implying that the INTO appeared in a disallowed place).
             */
            stmt->intoClause = NULL;

            parseTree = (Node *) ctas;
        }
    }

    return transformStmt(pstate, parseTree);
}
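
/*
 * For example, at the top level a statement such as
 *
 *    SELECT x INTO new_tab FROM old_tab;
 *
 * is rewritten here into the equivalent
 *
 *    CREATE TABLE new_tab AS SELECT x FROM old_tab;
 *
 * before the recursive transformStmt() processing sees it (new_tab and
 * old_tab are illustrative names).
 */
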
/*
 * transformStmt -
 *    recursively transform a Parse tree into a Query tree.
 */
Query *
transformStmt(ParseState *pstate, Node *parseTree)
{
    Query      *result;

    /*
     * We apply RAW_EXPRESSION_COVERAGE_TEST testing to basic DML statements;
     * we can't just run it on everything because raw_expression_tree_walker()
     * doesn't claim to handle utility statements.
     */
#ifdef RAW_EXPRESSION_COVERAGE_TEST
    switch (nodeTag(parseTree))
    {
        case T_SelectStmt:
        case T_InsertStmt:
        case T_UpdateStmt:
        case T_DeleteStmt:
            (void) test_raw_expression_coverage(parseTree, NULL);
            break;
        default:
            break;
    }
#endif                          /* RAW_EXPRESSION_COVERAGE_TEST */

    switch (nodeTag(parseTree))
    {
            /*
             * Optimizable statements
             */
        case T_InsertStmt:
            result = transformInsertStmt(pstate, (InsertStmt *) parseTree);
            break;

        case T_DeleteStmt:
            result = transformDeleteStmt(pstate, (DeleteStmt *) parseTree);
            break;

        case T_UpdateStmt:
            result = transformUpdateStmt(pstate, (UpdateStmt *) parseTree);
            break;

        case T_SelectStmt:
            {
                SelectStmt *n = (SelectStmt *) parseTree;

                if (n->valuesLists)
                    result = transformValuesClause(pstate, n);
                else if (n->op == SETOP_NONE)
                    result = transformSelectStmt(pstate, n);
                else
                    result = transformSetOperationStmt(pstate, n);
            }
            break;

            /*
             * Special cases
             */
        case T_DeclareCursorStmt:
            result = transformDeclareCursorStmt(pstate,
                                                (DeclareCursorStmt *) parseTree);
            break;

        case T_ExplainStmt:
            result = transformExplainStmt(pstate,
                                          (ExplainStmt *) parseTree);
            break;

        case T_CreateTableAsStmt:
            result = transformCreateTableAsStmt(pstate,
                                                (CreateTableAsStmt *) parseTree);
            break;

        case T_CallStmt:
            result = transformCallStmt(pstate,
                                       (CallStmt *) parseTree);
            break;

        default:

            /*
             * other statements don't require any transformation; just return
             * the original parsetree with a Query node plastered on top.
             */
            result = makeNode(Query);
            result->commandType = CMD_UTILITY;
            result->utilityStmt = (Node *) parseTree;
            break;
    }

    /* Mark as original query until we learn differently */
    result->querySource = QSRC_ORIGINAL;
    result->canSetTag = true;

    return result;
}

/*
 * analyze_requires_snapshot
 *    Returns true if a snapshot must be set before doing parse analysis
 *    on the given raw parse tree.
 *
 * Classification here should match transformStmt().
 */
bool
analyze_requires_snapshot(RawStmt *parseTree)
{
    bool        result;

    switch (nodeTag(parseTree->stmt))
    {
            /*
             * Optimizable statements
             */
        case T_InsertStmt:
        case T_DeleteStmt:
        case T_UpdateStmt:
        case T_SelectStmt:
            result = true;
            break;

            /*
             * Special cases
             */
        case T_DeclareCursorStmt:
        case T_ExplainStmt:
        case T_CreateTableAsStmt:
            /* yes, because we must analyze the contained statement */
            result = true;
            break;

        default:
            /* other utility statements don't have any real parse analysis */
            result = false;
            break;
    }

    return result;
}
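
/*
 * Concrete illustrations of the classification above: EXPLAIN SELECT ... and
 * CREATE TABLE ... AS SELECT ... return true because the contained SELECT is
 * analyzed, while a plain utility command such as VACUUM returns false since
 * it is merely wrapped in a CMD_UTILITY Query without real analysis.
 */
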
/*
 * transformDeleteStmt -
 *    transforms a Delete Statement
 */
static Query *
transformDeleteStmt(ParseState *pstate, DeleteStmt *stmt)
{
    Query      *qry = makeNode(Query);
    ParseNamespaceItem *nsitem;
    Node       *qual;

    qry->commandType = CMD_DELETE;

    /* process the WITH clause independently of all else */
    if (stmt->withClause)
    {
        qry->hasRecursive = stmt->withClause->recursive;
        qry->cteList = transformWithClause(pstate, stmt->withClause);
        qry->hasModifyingCTE = pstate->p_hasModifyingCTE;
    }

    /* set up range table with just the result rel */
    qry->resultRelation = setTargetTable(pstate, stmt->relation,
                                         stmt->relation->inh,
                                         true,
                                         ACL_DELETE);
    /* grab the namespace item made by setTargetTable */
    nsitem = (ParseNamespaceItem *) llast(pstate->p_namespace);

    /* there's no DISTINCT in DELETE */
    qry->distinctClause = NIL;

    /* subqueries in USING cannot access the result relation */
    nsitem->p_lateral_only = true;
    nsitem->p_lateral_ok = false;

    /*
     * The USING clause is non-standard SQL syntax, and is equivalent in
     * functionality to the FROM list that can be specified for UPDATE. The
     * USING keyword is used rather than FROM because FROM is already a
     * keyword in the DELETE syntax.
     */
    transformFromClause(pstate, stmt->usingClause);

    /* remaining clauses can reference the result relation normally */
    nsitem->p_lateral_only = false;
    nsitem->p_lateral_ok = true;

    qual = transformWhereClause(pstate, stmt->whereClause,
                                EXPR_KIND_WHERE, "WHERE");

    qry->returningList = transformReturningList(pstate, stmt->returningList);

    /* done building the range table and jointree */
    qry->rtable = pstate->p_rtable;
    qry->jointree = makeFromExpr(pstate->p_joinlist, qual);

    qry->hasSubLinks = pstate->p_hasSubLinks;
    qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
    qry->hasTargetSRFs = pstate->p_hasTargetSRFs;
    qry->hasAggs = pstate->p_hasAggs;

    assign_query_collations(pstate, qry);

    /* this must be done after collations, for reliable comparison of exprs */
    if (pstate->p_hasAggs)
        parseCheckAggregates(pstate, qry);

    return qry;
}
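
/*
 * Worked example for the USING handling above (table names are
 * illustrative): in
 *
 *    DELETE FROM target USING (SELECT 1 AS id) s WHERE target.id = s.id;
 *
 * the sub-select in USING is parsed while the target relation's namespace
 * item is lateral-only and not lateral-ok, so a reference to "target"
 * inside that sub-select would be rejected; the WHERE clause, parsed
 * afterwards, may reference both target and s normally.
 */
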
/*
 * transformInsertStmt -
 *    transform an Insert Statement
 */
static Query *
transformInsertStmt(ParseState *pstate, InsertStmt *stmt)
{
    Query      *qry = makeNode(Query);
    SelectStmt *selectStmt = (SelectStmt *) stmt->selectStmt;
    List       *exprList = NIL;
    bool        isGeneralSelect;
    List       *sub_rtable;
    List       *sub_namespace;
    List       *icolumns;
    List       *attrnos;
    RangeTblEntry *rte;
    RangeTblRef *rtr;
    ListCell   *icols;
    ListCell   *attnos;
    ListCell   *lc;
    bool        isOnConflictUpdate;
    AclMode     targetPerms;

    /* There can't be any outer WITH to worry about */
    Assert(pstate->p_ctenamespace == NIL);

    qry->commandType = CMD_INSERT;
    pstate->p_is_insert = true;

    /* process the WITH clause independently of all else */
    if (stmt->withClause)
    {
        qry->hasRecursive = stmt->withClause->recursive;
        qry->cteList = transformWithClause(pstate, stmt->withClause);
        qry->hasModifyingCTE = pstate->p_hasModifyingCTE;
    }

    qry->override = stmt->override;

    isOnConflictUpdate = (stmt->onConflictClause &&
                          stmt->onConflictClause->action == ONCONFLICT_UPDATE);

/*
* We have three cases to deal with: DEFAULT VALUES (selectStmt == NULL),
2006-10-04 02:30:14 +02:00
* VALUES list, or general SELECT input. We special-case VALUES, both for
* efficiency and so we can handle DEFAULT specifications.
*
* The grammar allows attaching ORDER BY, LIMIT, FOR UPDATE, or WITH to a
* VALUES clause. If we have any of those, treat it as a general SELECT;
* so it will work, but you can't use DEFAULT items together with those.
*/
isGeneralSelect = (selectStmt && (selectStmt->valuesLists == NIL ||
selectStmt->sortClause != NIL ||
selectStmt->limitOffset != NULL ||
selectStmt->limitCount != NULL ||
selectStmt->lockingClause != NIL ||
selectStmt->withClause != NULL));
/*
* If a non-nil rangetable/namespace was passed in, and we are doing
* INSERT/SELECT, arrange to pass the rangetable/namespace down to the
* SELECT. This can only happen if we are inside a CREATE RULE, and in
2005-10-15 04:49:52 +02:00
* that case we want the rule's OLD and NEW rtable entries to appear as
* part of the SELECT's rtable, not as outer references for it. (Kluge!)
* The SELECT's joinlist is not affected however. We must do this before
* adding the target table to the INSERT's rtable.
*/
if (isGeneralSelect)
{
sub_rtable = pstate->p_rtable;
pstate->p_rtable = NIL;
sub_namespace = pstate->p_namespace;
pstate->p_namespace = NIL;
}
else
{
sub_rtable = NIL; /* not used, but keep compiler quiet */
sub_namespace = NIL;
}
/*
* Must get write lock on INSERT target table before scanning SELECT, else
* we will grab the wrong kind of initial lock if the target table is also
* mentioned in the SELECT part. Note that the target table is not added
* to the joinlist or namespace.
*/
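/*
 * A sketch of why ACL_UPDATE is requested below (hypothetical names): an
 * upsert such as
 *		INSERT INTO t (k, v) VALUES (1, 'x')
 *			ON CONFLICT (k) DO UPDATE SET v = EXCLUDED.v;
 * may end up updating an existing row, so it needs UPDATE privilege on the
 * target table in addition to INSERT.
 */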
targetPerms = ACL_INSERT;
if (isOnConflictUpdate)
targetPerms |= ACL_UPDATE;
qry->resultRelation = setTargetTable(pstate, stmt->relation,
false, false, targetPerms);
/* Validate stmt->cols list, or build default list if no list given */
icolumns = checkInsertTargets(pstate, stmt->cols, &attrnos);
Assert(list_length(icolumns) == list_length(attrnos));
/*
* Determine which variant of INSERT we have.
*/
if (selectStmt == NULL)
{
/*
* We have INSERT ... DEFAULT VALUES. We can handle this case by
* emitting an empty targetlist --- all columns will be defaulted when
* the planner expands the targetlist.
*/
exprList = NIL;
}
else if (isGeneralSelect)
{
/*
* We make the sub-pstate a child of the outer pstate so that it can
* see any Param definitions supplied from above. Since the outer
* pstate's rtable and namespace are presently empty, there are no
* side-effects of exposing names the sub-SELECT shouldn't be able to
* see.
*/
ParseState *sub_pstate = make_parsestate(pstate);
Query *selectQuery;
/*
* Process the source SELECT.
*
* It is important that this be handled just like a standalone SELECT;
* otherwise the behavior of SELECT within INSERT might be different
* from a stand-alone SELECT. (Indeed, Postgres up through 6.5 had
* bugs of just that nature...)
*
* The sole exception is that we prevent resolving unknown-type
* outputs as TEXT. This does not change the semantics since if the
* column type matters semantically, it would have been resolved to
* something else anyway. Doing this lets us resolve such outputs as
* the target column's type, which we handle below.
*/
sub_pstate->p_rtable = sub_rtable;
sub_pstate->p_joinexprs = NIL; /* sub_rtable has no joins */
sub_pstate->p_namespace = sub_namespace;
sub_pstate->p_resolve_unknowns = false;
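/*
 * For example (hypothetical column "ts" of type timestamptz): in
 *		INSERT INTO t (ts) SELECT 'now';
 * the literal is left as type unknown here, so that it can later be
 * coerced to the target column's type rather than being forced to text.
 */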
selectQuery = transformStmt(sub_pstate, stmt->selectStmt);
free_parsestate(sub_pstate);
/* The grammar should have produced a SELECT */
if (!IsA(selectQuery, Query) ||
selectQuery->commandType != CMD_SELECT)
elog(ERROR, "unexpected non-SELECT command in INSERT ... SELECT");
/*
* Make the source be a subquery in the INSERT's rangetable, and add
* it to the INSERT's joinlist.
*/
rte = addRangeTableEntryForSubquery(pstate,
selectQuery,
makeAlias("*SELECT*", NIL),
false,
false);
rtr = makeNode(RangeTblRef);
/* assume new rte is at end */
rtr->rtindex = list_length(pstate->p_rtable);
Assert(rte == rt_fetch(rtr->rtindex, pstate->p_rtable));
pstate->p_joinlist = lappend(pstate->p_joinlist, rtr);
/*----------
* Generate an expression list for the INSERT that selects all the
* non-resjunk columns from the subquery. (INSERT's tlist must be
* separate from the subquery's tlist because we may add columns,
* insert datatype coercions, etc.)
*
* HACK: unknown-type constants and params in the SELECT's targetlist
* are copied up as-is rather than being referenced as subquery
* outputs. This is to ensure that when we try to coerce them to
* the target column's datatype, the right things happen (see
* special cases in coerce_type). Otherwise, this fails:
* INSERT INTO foo SELECT 'bar', ... FROM baz
*----------
*/
exprList = NIL;
foreach(lc, selectQuery->targetList)
{
TargetEntry *tle = (TargetEntry *) lfirst(lc);
Expr *expr;
if (tle->resjunk)
continue;
if (tle->expr &&
(IsA(tle->expr, Const) ||IsA(tle->expr, Param)) &&
exprType((Node *) tle->expr) == UNKNOWNOID)
expr = tle->expr;
else
{
Var *var = makeVarFromTargetEntry(rtr->rtindex, tle);
var->location = exprLocation((Node *) tle->expr);
expr = (Expr *) var;
}
exprList = lappend(exprList, expr);
}
/* Prepare row for assignment to target table */
exprList = transformInsertRow(pstate, exprList,
stmt->cols,
icolumns, attrnos,
false);
}
else if (list_length(selectStmt->valuesLists) > 1)
{
/*
* Process INSERT ... VALUES with multiple VALUES sublists. We
* generate a VALUES RTE holding the transformed expression lists, and
* build up a targetlist containing Vars that reference the VALUES
* RTE.
*/
List *exprsLists = NIL;
List *coltypes = NIL;
List *coltypmods = NIL;
List *colcollations = NIL;
int sublist_length = -1;
bool lateral = false;
Assert(selectStmt->intoClause == NULL);
foreach(lc, selectStmt->valuesLists)
{
List *sublist = (List *) lfirst(lc);
/*
* Do basic expression transformation (same as a ROW() expr, but
* allow SetToDefault at top level)
*/
sublist = transformExpressionList(pstate, sublist,
EXPR_KIND_VALUES, true);
/*
* All the sublists must be the same length, *after*
* transformation (which might expand '*' into multiple items).
* The VALUES RTE can't handle anything different.
*/
if (sublist_length < 0)
{
/* Remember post-transformation length of first sublist */
sublist_length = list_length(sublist);
}
else if (sublist_length != list_length(sublist))
{
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("VALUES lists must all be the same length"),
parser_errposition(pstate,
exprLocation((Node *) sublist))));
}
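/*
 * For example (hypothetical table), "INSERT INTO t VALUES (1, 2), (3)"
 * is rejected by the length check above.
 */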
/*
* Prepare row for assignment to target table. We process any
* indirection on the target column specs normally but then strip
* off the resulting field/array assignment nodes, since we don't
* want the parsed statement to contain copies of those in each
* VALUES row. (It's annoying to have to transform the
* indirection specs over and over like this, but avoiding it
* would take some really messy refactoring of
* transformAssignmentIndirection.)
*/
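/*
 * "Indirection" here means subscripting or field selection on a target
 * column, e.g. (hypothetical columns) INSERT INTO t (arr[1], comp.f)
 * VALUES (42, 'x'), which is what produces those assignment nodes.
 */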
sublist = transformInsertRow(pstate, sublist,
stmt->cols,
icolumns, attrnos,
true);
/*
* We must assign collations now because assign_query_collations
* doesn't process rangetable entries. We just assign all the
* collations independently in each row, and don't worry about
* whether they are consistent vertically. The outer INSERT query
* isn't going to care about the collations of the VALUES columns,
* so it's not worth the effort to identify a common collation for
* each one here. (But note this does have one user-visible
* consequence: INSERT ... VALUES won't complain about conflicting
* explicit COLLATEs in a column, whereas the same VALUES
* construct in another context would complain.)
*/
assign_list_collations(pstate, sublist);
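/*
 * An instance of the consequence noted above (hypothetical names):
 *		INSERT INTO t VALUES ('a' COLLATE "C"), ('b' COLLATE "en_US");
 * is accepted here, whereas a context that had to settle on one collation
 * for that column would complain about the conflict.
 */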
exprsLists = lappend(exprsLists, sublist);
}
/*
* Construct column type/typmod/collation lists for the VALUES RTE.
* Every expression in each column has been coerced to the type/typmod
* of the corresponding target column or subfield, so it's sufficient
* to look at the exprType/exprTypmod of the first row. We don't care
* about the collation labeling, so just fill in InvalidOid for that.
*/
foreach(lc, (List *) linitial(exprsLists))
{
Node *val = (Node *) lfirst(lc);
coltypes = lappend_oid(coltypes, exprType(val));
coltypmods = lappend_int(coltypmods, exprTypmod(val));
colcollations = lappend_oid(colcollations, InvalidOid);
}
/*
* Ordinarily there can't be any current-level Vars in the expression
* lists, because the namespace was empty ... but if we're inside
* CREATE RULE, then NEW/OLD references might appear. In that case we
* have to mark the VALUES RTE as LATERAL.
*/
if (list_length(pstate->p_rtable) != 1 &&
contain_vars_of_level((Node *) exprsLists, 0))
lateral = true;
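/*
 * For example, in a rule action such as (hypothetical names)
 *		CREATE RULE r AS ON INSERT TO v DO INSTEAD
 *			INSERT INTO t VALUES (NEW.a, 1), (NEW.b, 2);
 * the NEW references are current-level Vars, so the VALUES RTE must be
 * marked LATERAL.
 */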
/*
* Generate the VALUES RTE
*/
rte = addRangeTableEntryForValues(pstate, exprsLists,
coltypes, coltypmods, colcollations,
NULL, lateral, true);
rtr = makeNode(RangeTblRef);
/* assume new rte is at end */
rtr->rtindex = list_length(pstate->p_rtable);
Assert(rte == rt_fetch(rtr->rtindex, pstate->p_rtable));
pstate->p_joinlist = lappend(pstate->p_joinlist, rtr);
/*
* Generate list of Vars referencing the RTE
*/
expandRTE(rte, rtr->rtindex, 0, -1, false, NULL, &exprList);
/*
* Re-apply any indirection on the target column specs to the Vars
*/
exprList = transformInsertRow(pstate, exprList,
stmt->cols,
icolumns, attrnos,
false);
}
else
{
/*
* Process INSERT ... VALUES with a single VALUES sublist. We treat
* this case separately for efficiency. The sublist is just computed
* directly as the Query's targetlist, with no VALUES RTE. So it
* works just like a SELECT without any FROM.
*/
List *valuesLists = selectStmt->valuesLists;
Assert(list_length(valuesLists) == 1);
Assert(selectStmt->intoClause == NULL);
/*
* Do basic expression transformation (same as a ROW() expr, but allow
* SetToDefault at top level)
*/
exprList = transformExpressionList(pstate,
(List *) linitial(valuesLists),
EXPR_KIND_VALUES_SINGLE,
true);
/* Prepare row for assignment to target table */
exprList = transformInsertRow(pstate, exprList,
stmt->cols,
icolumns, attrnos,
false);
}
/*
* Generate query's target list using the computed list of expressions.
* Also, mark all the target columns as needing insert permissions.
*/
rte = pstate->p_target_rangetblentry;
qry->targetList = NIL;
Assert(list_length(exprList) <= list_length(icolumns));
forthree(lc, exprList, icols, icolumns, attnos, attrnos)
{
Expr *expr = (Expr *) lfirst(lc);
ResTarget *col = lfirst_node(ResTarget, icols);
AttrNumber attr_num = (AttrNumber) lfirst_int(attnos);
TargetEntry *tle;
tle = makeTargetEntry(expr,
attr_num,
col->name,
false);
qry->targetList = lappend(qry->targetList, tle);
rte->insertedCols = bms_add_member(rte->insertedCols,
attr_num - FirstLowInvalidHeapAttributeNumber);
}
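/*
 * insertedCols feeds column-level privilege checks at execution time; for
 * instance (hypothetical grant), a role holding only GRANT INSERT (a) ON t
 * can insert into column a, but an INSERT naming column b is rejected.
 */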
/* Process ON CONFLICT, if any. */
if (stmt->onConflictClause)
qry->onConflict = transformOnConflictClause(pstate,
stmt->onConflictClause);
/*
* If we have a RETURNING clause, we need to add the target relation to
* the query namespace before processing it, so that Var references in
* RETURNING will work. Also, remove any namespace entries added in a
* sub-SELECT or VALUES list.
*/
if (stmt->returningList)
{
pstate->p_namespace = NIL;
addRTEtoQuery(pstate, pstate->p_target_rangetblentry,
false, true, true);
qry->returningList = transformReturningList(pstate,
stmt->returningList);
}
/* done building the range table and jointree */
qry->rtable = pstate->p_rtable;
qry->jointree = makeFromExpr(pstate->p_joinlist, NULL);
qry->hasTargetSRFs = pstate->p_hasTargetSRFs;
qry->hasSubLinks = pstate->p_hasSubLinks;
assign_query_collations(pstate, qry);
return qry;
}
/*
* Prepare an INSERT row for assignment to the target table.
*
* exprlist: transformed expressions for source values; these might come from
* a VALUES row, or be Vars referencing a sub-SELECT or VALUES RTE output.
* stmtcols: original target-columns spec for INSERT (we just test for NIL)
* icolumns: effective target-columns spec (list of ResTarget)
* attrnos: integer column numbers (must be same length as icolumns)
* strip_indirection: if true, remove any field/array assignment nodes
*/
static List *
transformInsertRow(ParseState *pstate, List *exprlist,
List *stmtcols, List *icolumns, List *attrnos,
bool strip_indirection)
{
List *result;
ListCell *lc;
ListCell *icols;
ListCell *attnos;
/*
* Check length of expr list. It must not have more expressions than
* there are target columns. We allow fewer, but only if no explicit
* columns list was given (the remaining columns are implicitly
* defaulted). Note we must check this *after* transformation because
* that could expand '*' into multiple items.
*/
if (list_length(exprlist) > list_length(icolumns))
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("INSERT has more expressions than target columns"),
parser_errposition(pstate,
exprLocation(list_nth(exprlist,
list_length(icolumns))))));
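/*
 * Illustrative example (added note, not part of the original source):
 * a statement such as
 *     INSERT INTO t (a, b) VALUES (1, 2, 3);
 * trips this check, with the error cursor pointing at the first surplus
 * expression ("3").  The table and column names here are hypothetical.
 */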
if (stmtcols != NIL &&
list_length(exprlist) < list_length(icolumns))
{
/*
* We can get here for cases like INSERT ... SELECT (a,b,c) FROM ...
* where the user accidentally created a RowExpr instead of separate
* columns. Add a suitable hint if that seems to be the problem,
* because the main error message is quite misleading for this case.
* (If there's no stmtcols, you'll get something about data type
* mismatch, which is less misleading so we don't worry about giving a
* hint in that case.)
*/
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("INSERT has more target columns than expressions"),
((list_length(exprlist) == 1 &&
count_rowexpr_columns(pstate, linitial(exprlist)) ==
list_length(icolumns)) ?
errhint("The insertion source is a row expression containing the same number of columns expected by the INSERT. Did you accidentally use extra parentheses?") : 0),
parser_errposition(pstate,
exprLocation(list_nth(icolumns,
list_length(exprlist))))));
}
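/*
 * Illustrative example (added note, not part of the original source):
 * the hint above is aimed at statements such as
 *     INSERT INTO tab (a, b, c) SELECT (x, y, z) FROM src;
 * where "(x, y, z)" is a single ROW expression rather than three separate
 * columns; "SELECT x, y, z" is presumably what was intended.  All names
 * here are hypothetical.
 */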
/*
* Prepare columns for assignment to target table.
*/
result = NIL;
forthree(lc, exprlist, icols, icolumns, attnos, attrnos)
{
Expr *expr = (Expr *) lfirst(lc);
ResTarget *col = lfirst_node(ResTarget, icols);
int attno = lfirst_int(attnos);
expr = transformAssignedExpr(pstate, expr,
EXPR_KIND_INSERT_TARGET,
col->name,
attno,
col->indirection,
col->location);
if (strip_indirection)
{
while (expr)
{
if (IsA(expr, FieldStore))
{
FieldStore *fstore = (FieldStore *) expr;
expr = (Expr *) linitial(fstore->newvals);
}
else if (IsA(expr, SubscriptingRef))
{
SubscriptingRef *sbsref = (SubscriptingRef *) expr;
if (sbsref->refassgnexpr == NULL)
break;
expr = sbsref->refassgnexpr;
}
else
break;
}
}
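/*
 * Illustrative note (added, not part of the original source): for a
 * multi-row VALUES insert with indirection, e.g.
 *     INSERT INTO t (arr[1]) VALUES (41), (42);
 * the stripping above reduces each SubscriptingRef assignment back to the
 * bare source value (41 or 42), so the VALUES RTE carries only the
 * user-specified values; the subscript assignment itself is applied in
 * the top-level targetlist.  Names here are hypothetical.
 */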
result = lappend(result, expr);
}
return result;
}
/*
* transformOnConflictClause -
* transforms an OnConflictClause in an INSERT
*/
static OnConflictExpr *
transformOnConflictClause(ParseState *pstate,
OnConflictClause *onConflictClause)
{
List *arbiterElems;
Node *arbiterWhere;
Oid arbiterConstraint;
List *onConflictSet = NIL;
Node *onConflictWhere = NULL;
RangeTblEntry *exclRte = NULL;
int exclRelIndex = 0;
List *exclRelTlist = NIL;
OnConflictExpr *result;
/* Process the arbiter clause, ON CONFLICT ON (...) */
transformOnConflictArbiter(pstate, onConflictClause, &arbiterElems,
&arbiterWhere, &arbiterConstraint);
/* Process DO UPDATE */
if (onConflictClause->action == ONCONFLICT_UPDATE)
{
Relation targetrel = pstate->p_target_relation;
/*
* All INSERT expressions have been parsed, get ready for potentially
* existing SET statements that need to be processed like an UPDATE.
*/
pstate->p_is_insert = false;
/*
* Add range table entry for the EXCLUDED pseudo relation. relkind is
* set to composite to signal that we're not dealing with an actual
* relation, and no permission checks are required on it. (We'll
* check the actual target relation, instead.)
*/
exclRte = addRangeTableEntryForRelation(pstate,
targetrel,
RowExclusiveLock,
makeAlias("excluded", NIL),
false, false);
exclRte->relkind = RELKIND_COMPOSITE_TYPE;
exclRte->requiredPerms = 0;
/* other permissions fields in exclRte are already empty */
exclRelIndex = list_length(pstate->p_rtable);
/* Create EXCLUDED rel's targetlist for use by EXPLAIN */
exclRelTlist = BuildOnConflictExcludedTargetlist(targetrel,
exclRelIndex);
/*
* Add EXCLUDED and the target RTE to the namespace, so that they can
* be used in the UPDATE subexpressions.
*/
addRTEtoQuery(pstate, exclRte, false, true, true);
addRTEtoQuery(pstate, pstate->p_target_rangetblentry,
false, true, true);
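/*
 * Illustrative example (an added sketch; table "t", alias "tgt" and
 * columns "k", "v" are hypothetical): in
 *     INSERT INTO t AS tgt (k, v) VALUES (1, 'x')
 *         ON CONFLICT (k) DO UPDATE SET v = EXCLUDED.v
 *         WHERE tgt.v IS DISTINCT FROM EXCLUDED.v;
 * the SET and WHERE clauses must resolve both EXCLUDED.v (via exclRte)
 * and tgt.v (via the target RTE), which is why both entries are added
 * to the namespace before the transformations below.
 */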
/*
* Now transform the UPDATE subexpressions.
*/
onConflictSet =
transformUpdateTargetList(pstate, onConflictClause->targetList);
onConflictWhere = transformWhereClause(pstate,
onConflictClause->whereClause,
EXPR_KIND_WHERE, "WHERE");
}
/* Finally, build ON CONFLICT DO [NOTHING | UPDATE] expression */
result = makeNode(OnConflictExpr);
result->action = onConflictClause->action;
result->arbiterElems = arbiterElems;
result->arbiterWhere = arbiterWhere;
result->constraint = arbiterConstraint;
result->onConflictSet = onConflictSet;
result->onConflictWhere = onConflictWhere;
result->exclRelIndex = exclRelIndex;
result->exclRelTlist = exclRelTlist;
return result;
}
/*
* BuildOnConflictExcludedTargetlist
* Create target list for the EXCLUDED pseudo-relation of ON CONFLICT,
* representing the columns of targetrel with varno exclRelIndex.
*
* Note: Exported for use in the rewriter.
*/
List *
BuildOnConflictExcludedTargetlist(Relation targetrel,
Index exclRelIndex)
{
List *result = NIL;
int attno;
Var *var;
TargetEntry *te;
/*
* Note that resnos of the tlist must correspond to attnos of the
* underlying relation, hence we need entries for dropped columns too.
*/
for (attno = 0; attno < RelationGetNumberOfAttributes(targetrel); attno++)
{
Form_pg_attribute attr = TupleDescAttr(targetrel->rd_att, attno);
char *name;
if (attr->attisdropped)
{
/*
* can't use atttypid here, but it doesn't really matter what type
* the Const claims to be.
*/
var = (Var *) makeNullConst(INT4OID, -1, InvalidOid);
name = NULL;
}
else
{
var = makeVar(exclRelIndex, attno + 1,
attr->atttypid, attr->atttypmod,
attr->attcollation,
0);
name = pstrdup(NameStr(attr->attname));
}
te = makeTargetEntry((Expr *) var,
attno + 1,
name,
false);
result = lappend(result, te);
}
/*
* Add a whole-row-Var entry to support references to "EXCLUDED.*". Like
* the other entries in the EXCLUDED tlist, its resno must match the Var's
* varattno, else the wrong things happen while resolving references in
* setrefs.c. This is against normal conventions for targetlists, but
* it's okay since we don't use this as a real tlist.
*/
var = makeVar(exclRelIndex, InvalidAttrNumber,
targetrel->rd_rel->reltype,
-1, InvalidOid, 0);
te = makeTargetEntry((Expr *) var, InvalidAttrNumber, NULL, true);
result = lappend(result, te);
return result;
}
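/*
 * Sketch of the result (an added illustration; the relation layout is
 * hypothetical): for a target relation whose tuple descriptor is
 * (a int, <dropped>, b text), the list built above contains
 *     resno 1: Var for EXCLUDED.a,       resname "a"
 *     resno 2: null Const placeholder,   resname NULL
 *     resno 3: Var for EXCLUDED.b,       resname "b"
 * plus the trailing resjunk whole-row Var with resno InvalidAttrNumber,
 * supporting references to EXCLUDED.*.
 */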
/*
* count_rowexpr_columns -
* get number of columns contained in a ROW() expression;
* return -1 if expression isn't a RowExpr or a Var referencing one.
*
* This is currently used only for hint purposes, so we aren't terribly
* tense about recognizing all possible cases. The Var case is interesting
* because that's what we'll get in the INSERT ... SELECT (...) case.
*/
static int
count_rowexpr_columns(ParseState *pstate, Node *expr)
{
if (expr == NULL)
return -1;
if (IsA(expr, RowExpr))
return list_length(((RowExpr *) expr)->args);
if (IsA(expr, Var))
{
Var *var = (Var *) expr;
AttrNumber attnum = var->varattno;
if (attnum > 0 && var->vartype == RECORDOID)
{
RangeTblEntry *rte;
rte = GetRTEByRangeTablePosn(pstate, var->varno, var->varlevelsup);
if (rte->rtekind == RTE_SUBQUERY)
{
/* Subselect-in-FROM: examine sub-select's output expr */
TargetEntry *ste = get_tle_by_resno(rte->subquery->targetList,
attnum);
if (ste == NULL || ste->resjunk)
return -1;
expr = (Node *) ste->expr;
if (IsA(expr, RowExpr))
return list_length(((RowExpr *) expr)->args);
}
}
}
return -1;
}
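/*
 * Added illustration (hypothetical table t(a int, b int, c int)): a query
 * such as
 *     INSERT INTO t (a, b, c) SELECT (1, 2, 3);
 * supplies a single three-column row expression rather than three separate
 * columns.  Depending on how the source query was transformed, the caller
 * may see either the RowExpr itself or a RECORD-type Var referencing a
 * subquery output that is a RowExpr; both shapes are handled above so the
 * caller can hint that the parentheses were probably unintended.
 */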
/*
* transformSelectStmt -
* transforms a Select Statement
*
* Note: this covers only cases with no set operations and no VALUES lists;
* see below for the other cases.
*/
static Query *
transformSelectStmt(ParseState *pstate, SelectStmt *stmt)
{
Query *qry = makeNode(Query);
Node *qual;
ListCell *l;
qry->commandType = CMD_SELECT;
/* process the WITH clause independently of all else */
if (stmt->withClause)
{
qry->hasRecursive = stmt->withClause->recursive;
qry->cteList = transformWithClause(pstate, stmt->withClause);
qry->hasModifyingCTE = pstate->p_hasModifyingCTE;
}
/* Complain if we get called from someplace where INTO is not allowed */
if (stmt->intoClause)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("SELECT ... INTO is not allowed here"),
parser_errposition(pstate,
exprLocation((Node *) stmt->intoClause))));
/* make FOR UPDATE/FOR SHARE info available to addRangeTableEntry */
pstate->p_locking_clause = stmt->lockingClause;
/* make WINDOW info available for window functions, too */
pstate->p_windowdefs = stmt->windowClause;
/* process the FROM clause */
transformFromClause(pstate, stmt->fromClause);
/* transform targetlist */
qry->targetList = transformTargetList(pstate, stmt->targetList,
EXPR_KIND_SELECT_TARGET);
/* mark column origins */
markTargetListOrigins(pstate, qry->targetList);
/* transform WHERE */
qual = transformWhereClause(pstate, stmt->whereClause,
EXPR_KIND_WHERE, "WHERE");
/* initial processing of HAVING clause is much like WHERE clause */
qry->havingQual = transformWhereClause(pstate, stmt->havingClause,
EXPR_KIND_HAVING, "HAVING");
/*
* Transform sorting/grouping stuff. Do ORDER BY first because both
* transformGroupClause and transformDistinctClause need the results. Note
* that these functions can also change the targetList, so it's passed to
* them by reference.
*/
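/*
 * Added example of the ordering dependency (assumed illustration): for
 *     SELECT DISTINCT a, b FROM tbl ORDER BY a;
 * transformDistinctClause seeds the distinct list from the already
 * transformed sortClause entries so that the DISTINCT and ORDER BY
 * orderings stay compatible; hence ORDER BY is handled first.
 */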
qry->sortClause = transformSortClause(pstate,
stmt->sortClause,
&qry->targetList,
EXPR_KIND_ORDER_BY,
false /* allow SQL92 rules */ );
qry->groupClause = transformGroupClause(pstate,
stmt->groupClause,
&qry->groupingSets,
&qry->targetList,
qry->sortClause,
EXPR_KIND_GROUP_BY,
false /* allow SQL92 rules */ );
if (stmt->distinctClause == NIL)
{
qry->distinctClause = NIL;
qry->hasDistinctOn = false;
}
else if (linitial(stmt->distinctClause) == NULL)
{
/* We had SELECT DISTINCT */
qry->distinctClause = transformDistinctClause(pstate,
&qry->targetList,
qry->sortClause,
false);
qry->hasDistinctOn = false;
}
else
{
/* We had SELECT DISTINCT ON */
qry->distinctClause = transformDistinctOnClause(pstate,
stmt->distinctClause,
&qry->targetList,
qry->sortClause);
qry->hasDistinctOn = true;
}
/* transform LIMIT */
qry->limitOffset = transformLimitClause(pstate, stmt->limitOffset,
EXPR_KIND_OFFSET, "OFFSET");
qry->limitCount = transformLimitClause(pstate, stmt->limitCount,
EXPR_KIND_LIMIT, "LIMIT");
/* transform window clauses after we have seen all window functions */
qry->windowClause = transformWindowDefinitions(pstate,
pstate->p_windowdefs,
&qry->targetList);
/* resolve any still-unresolved output columns as being type text */
if (pstate->p_resolve_unknowns)
resolveTargetListUnknowns(pstate, qry->targetList);
qry->rtable = pstate->p_rtable;
qry->jointree = makeFromExpr(pstate->p_joinlist, qual);
qry->hasSubLinks = pstate->p_hasSubLinks;
qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
qry->hasTargetSRFs = pstate->p_hasTargetSRFs;
qry->hasAggs = pstate->p_hasAggs;
foreach(l, stmt->lockingClause)
{
transformLockingClause(pstate, qry,
(LockingClause *) lfirst(l), false);
}
assign_query_collations(pstate, qry);
/* this must be done after collations, for reliable comparison of exprs */
if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
parseCheckAggregates(pstate, qry);
return qry;
}
/*
* transformValuesClause -
* transforms a VALUES clause that's being used as a standalone SELECT
*
* We build a Query containing a VALUES RTE, rather as if one had written
* SELECT * FROM (VALUES ...) AS "*VALUES*"
*/
static Query *
transformValuesClause(ParseState *pstate, SelectStmt *stmt)
{
Query *qry = makeNode(Query);
List *exprsLists;
List *coltypes = NIL;
List *coltypmods = NIL;
List *colcollations = NIL;
List **colexprs = NULL;
int sublist_length = -1;
bool lateral = false;
RangeTblEntry *rte;
ParseNamespaceItem *nsitem;
ListCell *lc;
ListCell *lc2;
int i;
qry->commandType = CMD_SELECT;
/* Most SELECT stuff doesn't apply in a VALUES clause */
Assert(stmt->distinctClause == NIL);
Assert(stmt->intoClause == NULL);
Assert(stmt->targetList == NIL);
Assert(stmt->fromClause == NIL);
Assert(stmt->whereClause == NULL);
Assert(stmt->groupClause == NIL);
Assert(stmt->havingClause == NULL);
Assert(stmt->windowClause == NIL);
Assert(stmt->op == SETOP_NONE);
/* process the WITH clause independently of all else */
if (stmt->withClause)
{
qry->hasRecursive = stmt->withClause->recursive;
qry->cteList = transformWithClause(pstate, stmt->withClause);
qry->hasModifyingCTE = pstate->p_hasModifyingCTE;
}
/*
* For each row of VALUES, transform the raw expressions.
*
* Note that the intermediate representation we build is column-organized
* not row-organized. That simplifies the type and collation processing
* below.
*/
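/*
 * Added illustration: given
 *     VALUES (1, 'a'), (2, 'b')
 * the loop below produces colexprs[0] = {1, 2} and colexprs[1] = {'a', 'b'};
 * the row-organized exprsLists form is rebuilt only after the per-column
 * type and collation resolution further down.
 */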
foreach(lc, stmt->valuesLists)
{
List *sublist = (List *) lfirst(lc);
/*
* Do basic expression transformation (same as a ROW() expr, but here
* we disallow SetToDefault)
*/
sublist = transformExpressionList(pstate, sublist,
EXPR_KIND_VALUES, false);
/*
* All the sublists must be the same length, *after* transformation
* (which might expand '*' into multiple items). The VALUES RTE can't
* handle anything different.
*/
if (sublist_length < 0)
{
/* Remember post-transformation length of first sublist */
sublist_length = list_length(sublist);
/* and allocate array for per-column lists */
colexprs = (List **) palloc0(sublist_length * sizeof(List *));
}
else if (sublist_length != list_length(sublist))
{
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("VALUES lists must all be the same length"),
parser_errposition(pstate,
exprLocation((Node *) sublist))));
}
/* Build per-column expression lists */
i = 0;
foreach(lc2, sublist)
{
Node *col = (Node *) lfirst(lc2);
colexprs[i] = lappend(colexprs[i], col);
i++;
}
/* Release sub-list's cells to save memory */
list_free(sublist);
}
/*
* Now resolve the common types of the columns, and coerce everything to
* those types. Then identify the common typmod and common collation, if
* any, of each column.
*
* We must do collation processing now because (1) assign_query_collations
* doesn't process rangetable entries, and (2) we need to label the VALUES
* RTE with column collations for use in the outer query. We don't
* consider conflict of implicit collations to be an error here; instead
* the column will just show InvalidOid as its collation, and you'll get a
* failure later if that results in failure to resolve a collation.
*
* Note we modify the per-column expression lists in-place.
*/
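/*
 * Added example of the typmod rule (hypothetical input): in
 *     VALUES ('1.00'::numeric(10,2)), ('2.00'::numeric(10,2))
 * every row agrees, so the column keeps the typmod for numeric(10,2);
 * if one row supplied plain numeric instead, the loop below would fall
 * back to coltypmod = -1 for that column.
 */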
for (i = 0; i < sublist_length; i++)
{
Oid coltype;
int32 coltypmod = -1;
Oid colcoll;
bool first = true;
coltype = select_common_type(pstate, colexprs[i], "VALUES", NULL);
foreach(lc, colexprs[i])
{
Node *col = (Node *) lfirst(lc);
col = coerce_to_common_type(pstate, col, coltype, "VALUES");
lfirst(lc) = (void *) col;
if (first)
{
coltypmod = exprTypmod(col);
first = false;
}
else
{
/* As soon as we see a non-matching typmod, fall back to -1 */
if (coltypmod >= 0 && coltypmod != exprTypmod(col))
coltypmod = -1;
}
}
colcoll = select_common_collation(pstate, colexprs[i], true);
coltypes = lappend_oid(coltypes, coltype);
coltypmods = lappend_int(coltypmods, coltypmod);
colcollations = lappend_oid(colcollations, colcoll);
}
/*
* Finally, rearrange the coerced expressions into row-organized lists.
*/
exprsLists = NIL;
foreach(lc, colexprs[0])
{
Node *col = (Node *) lfirst(lc);
List *sublist;
sublist = list_make1(col);
exprsLists = lappend(exprsLists, sublist);
}
list_free(colexprs[0]);
for (i = 1; i < sublist_length; i++)
{
forboth(lc, colexprs[i], lc2, exprsLists)
{
Node *col = (Node *) lfirst(lc);
List *sublist = lfirst(lc2);
/* sublist pointer in exprsLists won't need adjustment */
(void) lappend(sublist, col);
}
list_free(colexprs[i]);
}
/*
* Ordinarily there can't be any current-level Vars in the expression
* lists, because the namespace was empty ... but if we're inside CREATE
* RULE, then NEW/OLD references might appear. In that case we have to
* mark the VALUES RTE as LATERAL.
*/
if (pstate->p_rtable != NIL &&
contain_vars_of_level((Node *) exprsLists, 0))
lateral = true;
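/*
 * Added example of the CREATE RULE case (rule and table names are
 * hypothetical):
 *     CREATE RULE log_ins AS ON INSERT TO t
 *         DO ALSO INSERT INTO t_log VALUES (NEW.id, now());
 * Here NEW.id is a current-level Var inside the VALUES list, so the
 * VALUES RTE must be marked LATERAL for the reference to be legal.
 */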
/*
* Generate the VALUES RTE
*/
rte = addRangeTableEntryForValues(pstate, exprsLists,
coltypes, coltypmods, colcollations,
NULL, lateral, true);
addRTEtoQuery(pstate, rte, true, true, true);
/* grab the namespace item made by addRTEtoQuery */
nsitem = (ParseNamespaceItem *) llast(pstate->p_namespace);
Assert(rte == nsitem->p_rte);
/*
* Generate a targetlist as though expanding "*"
*/
Assert(pstate->p_next_resno == 1);
qry->targetList = expandNSItemAttrs(pstate, nsitem, 0, -1);
/*
* The grammar allows attaching ORDER BY, LIMIT, and FOR UPDATE to a
* VALUES, so cope.
*/
qry->sortClause = transformSortClause(pstate,
stmt->sortClause,
&qry->targetList,
EXPR_KIND_ORDER_BY,
false /* allow SQL92 rules */ );
qry->limitOffset = transformLimitClause(pstate, stmt->limitOffset,
EXPR_KIND_OFFSET, "OFFSET");
qry->limitCount = transformLimitClause(pstate, stmt->limitCount,
EXPR_KIND_LIMIT, "LIMIT");
if (stmt->lockingClause)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
/*------
translator: %s is a SQL row locking clause such as FOR UPDATE */
errmsg("%s cannot be applied to VALUES",
LCS_asString(((LockingClause *)
linitial(stmt->lockingClause))->strength))));
qry->rtable = pstate->p_rtable;
qry->jointree = makeFromExpr(pstate->p_joinlist, NULL);
qry->hasSubLinks = pstate->p_hasSubLinks;
assign_query_collations(pstate, qry);
return qry;
}
/*
* transformSetOperationStmt -
* transforms a set-operations tree
*
* A set-operation tree is just a SELECT, but with UNION/INTERSECT/EXCEPT
* structure to it. We must transform each leaf SELECT and build up a top-
* level Query that contains the leaf SELECTs as subqueries in its rangetable.
* The tree of set operations is converted into the setOperations field of
* the top-level Query.
*/
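/*
 * Added sketch of the resulting shape (tables t1/t2 are hypothetical): for
 *     SELECT a FROM t1 UNION SELECT a FROM t2 ORDER BY 1;
 * each leaf SELECT becomes a subquery RTE of the top-level Query, the
 * UNION is recorded as a SetOperationStmt in qry->setOperations, and the
 * ORDER BY (and any LIMIT) is attached to the top-level Query itself.
 */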
static Query *
transformSetOperationStmt(ParseState *pstate, SelectStmt *stmt)
{
Query *qry = makeNode(Query);
SelectStmt *leftmostSelect;
int leftmostRTI;
Query *leftmostQuery;
SetOperationStmt *sostmt;
List *sortClause;
Node *limitOffset;
Node *limitCount;
List *lockingClause;
WithClause *withClause;
Node *node;
ListCell *left_tlist,
*lct,
*lcm,
*lcc,
*l;
List *targetvars,
*targetnames,
*sv_namespace;
int sv_rtable_length;
RangeTblEntry *jrte;
int tllen;
qry->commandType = CMD_SELECT;
/*
* Find leftmost leaf SelectStmt. We currently only need to do this in
* order to deliver a suitable error message if there's an INTO clause
* there, implying the set-op tree is in a context that doesn't allow
* INTO. (transformSetOperationTree would throw error anyway, but it
* seems worth the trouble to throw a different error for non-leftmost
* INTO, so we produce that error in transformSetOperationTree.)
*/
leftmostSelect = stmt->larg;
while (leftmostSelect && leftmostSelect->op != SETOP_NONE)
leftmostSelect = leftmostSelect->larg;
Assert(leftmostSelect && IsA(leftmostSelect, SelectStmt) &&
leftmostSelect->larg == NULL);
if (leftmostSelect->intoClause)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("SELECT ... INTO is not allowed here"),
parser_errposition(pstate,
exprLocation((Node *) leftmostSelect->intoClause))));
/*
* We need to extract ORDER BY and other top-level clauses here and not
* let transformSetOperationTree() see them --- else it'll just recurse
* right back here!
*/
sortClause = stmt->sortClause;
limitOffset = stmt->limitOffset;
limitCount = stmt->limitCount;
lockingClause = stmt->lockingClause;
withClause = stmt->withClause;
stmt->sortClause = NIL;
stmt->limitOffset = NULL;
stmt->limitCount = NULL;
stmt->lockingClause = NIL;
stmt->withClause = NULL;
/* We don't support FOR UPDATE/SHARE with set ops at the moment. */
if (lockingClause)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
/*------
translator: %s is a SQL row locking clause such as FOR UPDATE */
errmsg("%s is not allowed with UNION/INTERSECT/EXCEPT",
LCS_asString(((LockingClause *)
linitial(lockingClause))->strength))));
/* Process the WITH clause independently of all else */
if (withClause)
{
qry->hasRecursive = withClause->recursive;
qry->cteList = transformWithClause(pstate, withClause);
qry->hasModifyingCTE = pstate->p_hasModifyingCTE;
}
/*
* Recursively transform the components of the tree.
*/
sostmt = castNode(SetOperationStmt,
transformSetOperationTree(pstate, stmt, true, NULL));
Assert(sostmt);
qry->setOperations = (Node *) sostmt;
/*
* Re-find leftmost SELECT (now it's a sub-query in rangetable)
*/
node = sostmt->larg;
while (node && IsA(node, SetOperationStmt))
node = ((SetOperationStmt *) node)->larg;
Assert(node && IsA(node, RangeTblRef));
leftmostRTI = ((RangeTblRef *) node)->rtindex;
leftmostQuery = rt_fetch(leftmostRTI, pstate->p_rtable)->subquery;
Assert(leftmostQuery != NULL);
/*
* Generate dummy targetlist for outer query using column names of
* leftmost select and common datatypes/collations of topmost set
* operation. Also make lists of the dummy vars and their names for use
* in parsing ORDER BY.
*
* Note: we use leftmostRTI as the varno of the dummy variables. It
* shouldn't matter too much which RT index they have, as long as they
* have one that corresponds to a real RT entry; else funny things may
* happen when the tree is mashed by rule rewriting.
*/
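/*
 * Added illustration (assumed example): for
 *     SELECT a, b FROM t1 UNION SELECT x, y FROM t2
 * the outer targetlist gets two entries named "a" and "b" (the leftmost
 * SELECT's column names); their types, typmods and collations come from
 * sostmt, and every dummy Var uses leftmostRTI as its varno.
 */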
qry->targetList = NIL;
targetvars = NIL;
targetnames = NIL;
forfour(lct, sostmt->colTypes,
lcm, sostmt->colTypmods,
lcc, sostmt->colCollations,
left_tlist, leftmostQuery->targetList)
{
Oid colType = lfirst_oid(lct);
int32 colTypmod = lfirst_int(lcm);
Oid colCollation = lfirst_oid(lcc);
TargetEntry *lefttle = (TargetEntry *) lfirst(left_tlist);
char *colName;
TargetEntry *tle;
Var *var;
Assert(!lefttle->resjunk);
colName = pstrdup(lefttle->resname);
var = makeVar(leftmostRTI,
lefttle->resno,
colType,
colTypmod,
colCollation,
0);
var->location = exprLocation((Node *) lefttle->expr);
tle = makeTargetEntry((Expr *) var,
(AttrNumber) pstate->p_next_resno++,
colName,
false);
qry->targetList = lappend(qry->targetList, tle);
targetvars = lappend(targetvars, var);
targetnames = lappend(targetnames, makeString(colName));
}
/*
* As a first step towards supporting sort clauses that are expressions
* using the output columns, generate a namespace entry that makes the
* output columns visible. A Join RTE node is handy for this, since we
* can easily control the Vars generated upon matches.
*
* Note: we don't yet do anything useful with such cases, but at least
* "ORDER BY upper(foo)" will draw the right error message rather than
* "foo not found".
*/
sv_rtable_length = list_length(pstate->p_rtable);
jrte = addRangeTableEntryForJoin(pstate,
targetnames,
JOIN_INNER,
targetvars,
NULL,
false);
sv_namespace = pstate->p_namespace;
pstate->p_namespace = NIL;
/* add jrte to column namespace only */
addRTEtoQuery(pstate, jrte, false, false, true);
/*
* For now, we don't support resjunk sort clauses on the output of a
* setOperation tree --- you can only use the SQL92-spec options of
* selecting an output column by name or number. Enforce by checking that
* transformSortClause doesn't add any items to tlist.
*/
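	/*
	 * For example (hypothetical tables): "SELECT a FROM t1 UNION SELECT a
	 * FROM t2 ORDER BY a" and "... ORDER BY 1" are accepted, whereas
	 * "... ORDER BY upper(a)" would require adding a resjunk tlist entry
	 * and is rejected by the check below.
	 */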
tllen = list_length(qry->targetList);
qry->sortClause = transformSortClause(pstate,
sortClause,
&qry->targetList,
EXPR_KIND_ORDER_BY,
false /* allow SQL92 rules */ );
/* restore namespace, remove jrte from rtable */
pstate->p_namespace = sv_namespace;
pstate->p_rtable = list_truncate(pstate->p_rtable, sv_rtable_length);
if (tllen != list_length(qry->targetList))
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("invalid UNION/INTERSECT/EXCEPT ORDER BY clause"),
errdetail("Only result column names can be used, not expressions or functions."),
errhint("Add the expression/function to every SELECT, or move the UNION into a FROM clause."),
parser_errposition(pstate,
exprLocation(list_nth(qry->targetList, tllen)))));
qry->limitOffset = transformLimitClause(pstate, limitOffset,
EXPR_KIND_OFFSET, "OFFSET");
qry->limitCount = transformLimitClause(pstate, limitCount,
EXPR_KIND_LIMIT, "LIMIT");
qry->rtable = pstate->p_rtable;
qry->jointree = makeFromExpr(pstate->p_joinlist, NULL);
qry->hasSubLinks = pstate->p_hasSubLinks;
qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
qry->hasTargetSRFs = pstate->p_hasTargetSRFs;
qry->hasAggs = pstate->p_hasAggs;
foreach(l, lockingClause)
{
transformLockingClause(pstate, qry,
(LockingClause *) lfirst(l), false);
}
assign_query_collations(pstate, qry);
/* this must be done after collations, for reliable comparison of exprs */
if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
parseCheckAggregates(pstate, qry);
return qry;
}
/*
* transformSetOperationTree
* Recursively transform leaves and internal nodes of a set-op tree
*
* In addition to returning the transformed node, if targetlist isn't NULL
* then we return a list of its non-resjunk TargetEntry nodes. For a leaf
* set-op node these are the actual targetlist entries; otherwise they are
* dummy entries created to carry the type, typmod, collation, and location
* (for error messages) of each output column of the set-op node. This info
* is needed only during the internal recursion of this function, so outside
* callers pass NULL for targetlist. Note: the reason for passing the
* actual targetlist entries of a leaf node is so that upper levels can
* replace UNKNOWN Consts with properly-coerced constants.
*/
static Node *
transformSetOperationTree(ParseState *pstate, SelectStmt *stmt,
bool isTopLevel, List **targetlist)
{
bool isLeaf;
Assert(stmt && IsA(stmt, SelectStmt));
/* Guard against stack overflow due to overly complex set-expressions */
check_stack_depth();
/*
* Validity-check both leaf and internal SELECTs for disallowed ops.
*/
if (stmt->intoClause)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("INTO is only allowed on first SELECT of UNION/INTERSECT/EXCEPT"),
parser_errposition(pstate,
exprLocation((Node *) stmt->intoClause))));
/* We don't support FOR UPDATE/SHARE with set ops at the moment. */
if (stmt->lockingClause)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
/*------
translator: %s is a SQL row locking clause such as FOR UPDATE */
errmsg("%s is not allowed with UNION/INTERSECT/EXCEPT",
LCS_asString(((LockingClause *)
linitial(stmt->lockingClause))->strength))));
/*
* If an internal node of a set-op tree has ORDER BY, LIMIT, FOR UPDATE,
* or WITH clauses attached, we need to treat it like a leaf node to
* generate an independent sub-Query tree. Otherwise, it can be
* represented by a SetOperationStmt node underneath the parent Query.
*/
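	/*
	 * For example (hypothetical tables): in
	 *		(SELECT a FROM t1 ORDER BY a LIMIT 1) UNION SELECT a FROM t2
	 * the parenthesized member carries its own ORDER BY/LIMIT, so it is
	 * treated as a leaf and becomes an independent sub-Query.
	 */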
if (stmt->op == SETOP_NONE)
{
Assert(stmt->larg == NULL && stmt->rarg == NULL);
isLeaf = true;
}
else
{
Assert(stmt->larg != NULL && stmt->rarg != NULL);
if (stmt->sortClause || stmt->limitOffset || stmt->limitCount ||
stmt->lockingClause || stmt->withClause)
isLeaf = true;
else
isLeaf = false;
}
if (isLeaf)
{
/* Process leaf SELECT */
Query *selectQuery;
char selectName[32];
RangeTblEntry *rte PG_USED_FOR_ASSERTS_ONLY;
RangeTblRef *rtr;
ListCell *tl;
/*
* Transform SelectStmt into a Query.
*
* This works the same as SELECT transformation normally would, except
* that we prevent resolving unknown-type outputs as TEXT. This does
* not change the subquery's semantics since if the column type
* matters semantically, it would have been resolved to something else
* anyway. Doing this lets us resolve such outputs using
* select_common_type(), below.
*
* Note: previously transformed sub-queries don't affect the parsing
* of this sub-query, because they are not in the toplevel pstate's
* namespace list.
*/
selectQuery = parse_sub_analyze((Node *) stmt, pstate,
NULL, false, false);
/*
* Check for bogus references to Vars on the current query level (but
* upper-level references are okay). Normally this can't happen
* because the namespace will be empty, but it could happen if we are
* inside a rule.
*/
if (pstate->p_namespace)
{
if (contain_vars_of_level((Node *) selectQuery, 1))
ereport(ERROR,
(errcode(ERRCODE_INVALID_COLUMN_REFERENCE),
errmsg("UNION/INTERSECT/EXCEPT member statement cannot refer to other relations of same query level"),
parser_errposition(pstate,
locate_var_of_level((Node *) selectQuery, 1))));
}
/*
* Extract a list of the non-junk TLEs for upper-level processing.
*/
if (targetlist)
{
*targetlist = NIL;
foreach(tl, selectQuery->targetList)
{
TargetEntry *tle = (TargetEntry *) lfirst(tl);
if (!tle->resjunk)
*targetlist = lappend(*targetlist, tle);
}
}
/*
* Make the leaf query be a subquery in the top-level rangetable.
*/
snprintf(selectName, sizeof(selectName), "*SELECT* %d",
list_length(pstate->p_rtable) + 1);
rte = addRangeTableEntryForSubquery(pstate,
selectQuery,
makeAlias(selectName, NIL),
false,
false);
/*
* Return a RangeTblRef to replace the SelectStmt in the set-op tree.
*/
rtr = makeNode(RangeTblRef);
/* assume new rte is at end */
rtr->rtindex = list_length(pstate->p_rtable);
Assert(rte == rt_fetch(rtr->rtindex, pstate->p_rtable));
return (Node *) rtr;
}
else
{
/* Process an internal node (set operation node) */
SetOperationStmt *op = makeNode(SetOperationStmt);
List *ltargetlist;
List *rtargetlist;
ListCell *ltl;
ListCell *rtl;
const char *context;
context = (stmt->op == SETOP_UNION ? "UNION" :
(stmt->op == SETOP_INTERSECT ? "INTERSECT" :
"EXCEPT"));
op->op = stmt->op;
op->all = stmt->all;
/*
* Recursively transform the left child node.
*/
op->larg = transformSetOperationTree(pstate, stmt->larg,
false,
&ltargetlist);
/*
* If we are processing a recursive union query, now is the time to
* examine the non-recursive term's output columns and mark the
* containing CTE as having those result columns. We should do this
* only at the topmost setop of the CTE, of course.
*/
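		/*
		 * For example (hypothetical query): in
		 *		WITH RECURSIVE r(n) AS (
		 *			SELECT 1 UNION ALL SELECT n + 1 FROM r WHERE n < 10
		 *		) SELECT * FROM r
		 * the column type of r.n is derived from the non-recursive term
		 * "SELECT 1" before the recursive term (which references r) is
		 * transformed.
		 */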
if (isTopLevel &&
pstate->p_parent_cte &&
pstate->p_parent_cte->cterecursive)
determineRecursiveColTypes(pstate, op->larg, ltargetlist);
/*
* Recursively transform the right child node.
*/
op->rarg = transformSetOperationTree(pstate, stmt->rarg,
false,
&rtargetlist);
/*
* Verify that the two children have the same number of non-junk
* columns, and determine the types of the merged output columns.
*/
if (list_length(ltargetlist) != list_length(rtargetlist))
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("each %s query must have the same number of columns",
context),
parser_errposition(pstate,
exprLocation((Node *) rtargetlist))));
if (targetlist)
*targetlist = NIL;
op->colTypes = NIL;
op->colTypmods = NIL;
op->colCollations = NIL;
op->groupClauses = NIL;
forboth(ltl, ltargetlist, rtl, rtargetlist)
{
TargetEntry *ltle = (TargetEntry *) lfirst(ltl);
TargetEntry *rtle = (TargetEntry *) lfirst(rtl);
Node *lcolnode = (Node *) ltle->expr;
Node *rcolnode = (Node *) rtle->expr;
Oid lcoltype = exprType(lcolnode);
Oid rcoltype = exprType(rcolnode);
int32 lcoltypmod = exprTypmod(lcolnode);
int32 rcoltypmod = exprTypmod(rcolnode);
Node *bestexpr;
int bestlocation;
Oid rescoltype;
int32 rescoltypmod;
Oid rescolcoll;
/* select common type, same as CASE et al */
rescoltype = select_common_type(pstate,
list_make2(lcolnode, rcolnode),
context,
&bestexpr);
bestlocation = exprLocation(bestexpr);
/* if same type and same typmod, use typmod; else default */
if (lcoltype == rcoltype && lcoltypmod == rcoltypmod)
rescoltypmod = lcoltypmod;
else
rescoltypmod = -1;
/*
* Verify the coercions are actually possible. If not, we'd fail
* later anyway, but we want to fail now while we have sufficient
* context to produce an error cursor position.
*
* For all non-UNKNOWN-type cases, we verify coercibility but we
* don't modify the child's expression, for fear of changing the
* child query's semantics.
*
* If a child expression is an UNKNOWN-type Const or Param, we
* want to replace it with the coerced expression. This can only
* happen when the child is a leaf set-op node. It's safe to
* replace the expression because if the child query's semantics
* depended on the type of this output column, it'd have already
* coerced the UNKNOWN to something else. We want to do this
* because (a) we want to verify that a Const is valid for the
* target type, or resolve the actual type of an UNKNOWN Param,
* and (b) we want to avoid unnecessary discrepancies between the
* output type of the child query and the resolved target type.
* Such a discrepancy would disable optimization in the planner.
*
* If it's some other UNKNOWN-type node, eg a Var, we do nothing
* (knowing that coerce_to_common_type would fail). The planner
* is sometimes able to fold an UNKNOWN Var to a constant before
* it has to coerce the type, so failing now would just break
* cases that might work.
*/
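			/*
			 * For example (hypothetical table): in
			 *		SELECT 'foo' UNION SELECT b FROM t2
			 * the left leaf's output column is an UNKNOWN-type Const, so it
			 * is replaced here with a version coerced to the common column
			 * type chosen by select_common_type().
			 */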
if (lcoltype != UNKNOWNOID)
lcolnode = coerce_to_common_type(pstate, lcolnode,
rescoltype, context);
else if (IsA(lcolnode, Const) ||
IsA(lcolnode, Param))
{
lcolnode = coerce_to_common_type(pstate, lcolnode,
rescoltype, context);
ltle->expr = (Expr *) lcolnode;
}
if (rcoltype != UNKNOWNOID)
rcolnode = coerce_to_common_type(pstate, rcolnode,
rescoltype, context);
else if (IsA(rcolnode, Const) ||
IsA(rcolnode, Param))
{
rcolnode = coerce_to_common_type(pstate, rcolnode,
rescoltype, context);
rtle->expr = (Expr *) rcolnode;
}
/*
* Select common collation. A common collation is required for
* all set operators except UNION ALL; see SQL:2008 7.13 <query
* expression> Syntax Rule 15c. (If we fail to identify a common
* collation for a UNION ALL column, the colCollations element
* will be set to InvalidOid, which may result in a runtime error
* if something at a higher query level wants to use the column's
* collation.)
*/
rescolcoll = select_common_collation(pstate,
list_make2(lcolnode, rcolnode),
(op->op == SETOP_UNION && op->all));
/* emit results */
op->colTypes = lappend_oid(op->colTypes, rescoltype);
op->colTypmods = lappend_int(op->colTypmods, rescoltypmod);
op->colCollations = lappend_oid(op->colCollations, rescolcoll);
/*
* For all cases except UNION ALL, identify the grouping operators
* (and, if available, sorting operators) that will be used to
* eliminate duplicates.
*/
if (op->op != SETOP_UNION || !op->all)
{
SortGroupClause *grpcl = makeNode(SortGroupClause);
Oid sortop;
Oid eqop;
bool hashable;
ParseCallbackState pcbstate;
setup_parser_errposition_callback(&pcbstate, pstate,
bestlocation);
/* determine the eqop and optional sortop */
get_sort_group_operators(rescoltype,
false, true, false,
&sortop, &eqop, NULL,
&hashable);
cancel_parser_errposition_callback(&pcbstate);
/* we don't have a tlist yet, so can't assign sortgrouprefs */
grpcl->tleSortGroupRef = 0;
grpcl->eqop = eqop;
grpcl->sortop = sortop;
grpcl->nulls_first = false; /* OK with or without sortop */
grpcl->hashable = hashable;
op->groupClauses = lappend(op->groupClauses, grpcl);
}
/*
* Construct a dummy tlist entry to return. We use a SetToDefault
* node for the expression, since it carries exactly the fields
* needed, but any other expression node type would do as well.
*/
if (targetlist)
{
SetToDefault *rescolnode = makeNode(SetToDefault);
TargetEntry *restle;
rescolnode->typeId = rescoltype;
rescolnode->typeMod = rescoltypmod;
rescolnode->collation = rescolcoll;
rescolnode->location = bestlocation;
restle = makeTargetEntry((Expr *) rescolnode,
0, /* no need to set resno */
NULL,
false);
*targetlist = lappend(*targetlist, restle);
}
}
return (Node *) op;
}
}
/*
* Process the outputs of the non-recursive term of a recursive union
* to set up the parent CTE's columns
*/
static void
determineRecursiveColTypes(ParseState *pstate, Node *larg, List *nrtargetlist)
{
Node *node;
int leftmostRTI;
Query *leftmostQuery;
List *targetList;
ListCell *left_tlist;
ListCell *nrtl;
int next_resno;
/*
* Find leftmost leaf SELECT
*/
node = larg;
while (node && IsA(node, SetOperationStmt))
node = ((SetOperationStmt *) node)->larg;
Assert(node && IsA(node, RangeTblRef));
leftmostRTI = ((RangeTblRef *) node)->rtindex;
leftmostQuery = rt_fetch(leftmostRTI, pstate->p_rtable)->subquery;
Assert(leftmostQuery != NULL);
/*
* Generate dummy targetlist using column names of leftmost select and
* dummy result expressions of the non-recursive term.
*/
targetList = NIL;
next_resno = 1;
forboth(nrtl, nrtargetlist, left_tlist, leftmostQuery->targetList)
{
TargetEntry *nrtle = (TargetEntry *) lfirst(nrtl);
TargetEntry *lefttle = (TargetEntry *) lfirst(left_tlist);
char *colName;
TargetEntry *tle;
Assert(!lefttle->resjunk);
colName = pstrdup(lefttle->resname);
tle = makeTargetEntry(nrtle->expr,
next_resno++,
colName,
false);
targetList = lappend(targetList, tle);
}
/* Now build CTE's output column info using dummy targetlist */
analyzeCTETargetList(pstate, pstate->p_parent_cte, targetList);
}
/*
* transformUpdateStmt -
* transforms an update statement
*/
static Query *
transformUpdateStmt(ParseState *pstate, UpdateStmt *stmt)
{
Query *qry = makeNode(Query);
ParseNamespaceItem *nsitem;
Node *qual;
qry->commandType = CMD_UPDATE;
pstate->p_is_insert = false;
/* process the WITH clause independently of all else */
if (stmt->withClause)
{
qry->hasRecursive = stmt->withClause->recursive;
qry->cteList = transformWithClause(pstate, stmt->withClause);
qry->hasModifyingCTE = pstate->p_hasModifyingCTE;
}
qry->resultRelation = setTargetTable(pstate, stmt->relation,
stmt->relation->inh,
true,
ACL_UPDATE);
/* grab the namespace item made by setTargetTable */
nsitem = (ParseNamespaceItem *) llast(pstate->p_namespace);
/* subqueries in FROM cannot access the result relation */
nsitem->p_lateral_only = true;
nsitem->p_lateral_ok = false;
/*
* the FROM clause is non-standard SQL syntax. We used to be able to do
* this with REPLACE in POSTQUEL so we keep the feature.
*/
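	/*
	 * For example (hypothetical tables): "UPDATE t SET x = s.x FROM s
	 * WHERE t.id = s.id" joins the extra relation s against the update
	 * target.
	 */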
transformFromClause(pstate, stmt->fromClause);
/* remaining clauses can reference the result relation normally */
nsitem->p_lateral_only = false;
nsitem->p_lateral_ok = true;
qual = transformWhereClause(pstate, stmt->whereClause,
EXPR_KIND_WHERE, "WHERE");
qry->returningList = transformReturningList(pstate, stmt->returningList);
/*
* Now we are done with SELECT-like processing, and can get on with
* transforming the target list to match the UPDATE target columns.
*/
qry->targetList = transformUpdateTargetList(pstate, stmt->targetList);
qry->rtable = pstate->p_rtable;
qry->jointree = makeFromExpr(pstate->p_joinlist, qual);
qry->hasTargetSRFs = pstate->p_hasTargetSRFs;
qry->hasSubLinks = pstate->p_hasSubLinks;
assign_query_collations(pstate, qry);
return qry;
}
/*
* transformUpdateTargetList -
* handle SET clause in UPDATE/INSERT ... ON CONFLICT UPDATE
*/
static List *
transformUpdateTargetList(ParseState *pstate, List *origTlist)
{
List *tlist = NIL;
RangeTblEntry *target_rte;
ListCell *orig_tl;
ListCell *tl;
TupleDesc tupdesc = pstate->p_target_relation->rd_att;
tlist = transformTargetList(pstate, origTlist,
EXPR_KIND_UPDATE_SOURCE);
/* Prepare to assign non-conflicting resnos to resjunk attributes */
if (pstate->p_next_resno <= RelationGetNumberOfAttributes(pstate->p_target_relation))
pstate->p_next_resno = RelationGetNumberOfAttributes(pstate->p_target_relation) + 1;
/* Prepare non-junk columns for assignment to target table */
target_rte = pstate->p_target_rangetblentry;
orig_tl = list_head(origTlist);
foreach(tl, tlist)
{
TargetEntry *tle = (TargetEntry *) lfirst(tl);
ResTarget *origTarget;
int attrno;
if (tle->resjunk)
{
/*
* Resjunk nodes need no additional processing, but be sure they
* have resnos that do not match any target columns; else rewriter
* or planner might get confused. They don't need a resname
* either.
*/
tle->resno = (AttrNumber) pstate->p_next_resno++;
tle->resname = NULL;
continue;
}
if (orig_tl == NULL)
elog(ERROR, "UPDATE target count mismatch --- internal error");
origTarget = lfirst_node(ResTarget, orig_tl);
attrno = attnameAttNum(pstate->p_target_relation,
origTarget->name, true);
if (attrno == InvalidAttrNumber)
ereport(ERROR,
(errcode(ERRCODE_UNDEFINED_COLUMN),
errmsg("column \"%s\" of relation \"%s\" does not exist",
origTarget->name,
RelationGetRelationName(pstate->p_target_relation)),
parser_errposition(pstate, origTarget->location)));
updateTargetListEntry(pstate, tle, origTarget->name,
attrno,
origTarget->indirection,
origTarget->location);
/* Mark the target column as requiring update permissions */
target_rte->updatedCols = bms_add_member(target_rte->updatedCols,
attrno - FirstLowInvalidHeapAttributeNumber);
orig_tl = lnext(origTlist, orig_tl);
}
if (orig_tl != NULL)
elog(ERROR, "UPDATE target count mismatch --- internal error");
/*
* Record in extraUpdatedCols generated columns referencing updated base
* columns.
*/
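	/*
	 * For example (hypothetical table): given a column "b int GENERATED
	 * ALWAYS AS (a + 1) STORED", an "UPDATE ... SET a = ..." marks b in
	 * extraUpdatedCols, because b's generation expression references a.
	 */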
if (tupdesc->constr &&
tupdesc->constr->has_generated_stored)
{
for (int i = 0; i < tupdesc->constr->num_defval; i++)
{
AttrDefault defval = tupdesc->constr->defval[i];
Node *expr;
Bitmapset *attrs_used = NULL;
/* skip if not generated column */
if (!TupleDescAttr(tupdesc, defval.adnum - 1)->attgenerated)
continue;
expr = stringToNode(defval.adbin);
pull_varattnos(expr, 1, &attrs_used);
if (bms_overlap(target_rte->updatedCols, attrs_used))
target_rte->extraUpdatedCols = bms_add_member(target_rte->extraUpdatedCols,
defval.adnum - FirstLowInvalidHeapAttributeNumber);
}
}
return tlist;
}
/*
* transformReturningList -
* handle a RETURNING clause in INSERT/UPDATE/DELETE
*/
static List *
transformReturningList(ParseState *pstate, List *returningList)
{
List *rlist;
int save_next_resno;
if (returningList == NIL)
return NIL; /* nothing to do */
/*
* We need to assign resnos starting at one in the RETURNING list. Save
* and restore the main tlist's value of p_next_resno, just in case
* someone looks at it later (probably won't happen).
*/
save_next_resno = pstate->p_next_resno;
pstate->p_next_resno = 1;
/* transform RETURNING identically to a SELECT targetlist */
rlist = transformTargetList(pstate, returningList, EXPR_KIND_RETURNING);
/*
* Complain if the nonempty tlist expanded to nothing (which is possible
* if it contains only a star-expansion of a zero-column table). If we
* allow this, the parsed Query will look like it didn't have RETURNING,
* with results that would probably surprise the user.
*/
if (rlist == NIL)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("RETURNING must have at least one column"),
parser_errposition(pstate,
exprLocation(linitial(returningList)))));
/* mark column origins */
markTargetListOrigins(pstate, rlist);
/* resolve any still-unresolved output columns as being type text */
if (pstate->p_resolve_unknowns)
resolveTargetListUnknowns(pstate, rlist);
/* restore state */
pstate->p_next_resno = save_next_resno;
return rlist;
}
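/*
 * For illustration only: assuming a hypothetical table foo(a int, b text),
 * all of the following RETURNING lists are transformed by the function
 * above exactly like SELECT targetlists:
 *
 *		INSERT INTO foo VALUES (1, 'x') RETURNING a, b;
 *		UPDATE foo SET b = 'y' RETURNING foo.*;
 *		DELETE FROM foo RETURNING a * 2 AS doubled;
 *
 * By contrast, "RETURNING zerocols.*" against a hypothetical zero-column
 * table expands to an empty list and raises the "RETURNING must have at
 * least one column" error.
 */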
/*
* transformDeclareCursorStmt -
* transform a DECLARE CURSOR Statement
*
* DECLARE CURSOR is like other utility statements in that we emit it as a
* CMD_UTILITY Query node; however, we must first transform the contained
* query. We used to postpone that until execution, but it's really necessary
* to do it during the normal parse analysis phase to ensure that side effects
* of parser hooks happen at the expected time.
*/
static Query *
transformDeclareCursorStmt(ParseState *pstate, DeclareCursorStmt *stmt)
{
Query *result;
Query *query;
/*
* Don't allow both SCROLL and NO SCROLL to be specified
*/
if ((stmt->options & CURSOR_OPT_SCROLL) &&
(stmt->options & CURSOR_OPT_NO_SCROLL))
ereport(ERROR,
(errcode(ERRCODE_INVALID_CURSOR_DEFINITION),
errmsg("cannot specify both SCROLL and NO SCROLL")));
/* Transform contained query, not allowing SELECT INTO */
query = transformStmt(pstate, stmt->query);
stmt->query = (Node *) query;
/* Grammar should not have allowed anything but SELECT */
if (!IsA(query, Query) ||
query->commandType != CMD_SELECT)
elog(ERROR, "unexpected non-SELECT command in DECLARE CURSOR");
/*
* We also disallow data-modifying WITH in a cursor. (This could be
* allowed, but the semantics of when the updates occur might be
* surprising.)
*/
if (query->hasModifyingCTE)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("DECLARE CURSOR must not contain data-modifying statements in WITH")));
/* FOR UPDATE and WITH HOLD are not compatible */
if (query->rowMarks != NIL && (stmt->options & CURSOR_OPT_HOLD))
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
/*------
translator: %s is a SQL row locking clause such as FOR UPDATE */
errmsg("DECLARE CURSOR WITH HOLD ... %s is not supported",
LCS_asString(((RowMarkClause *)
linitial(query->rowMarks))->strength)),
errdetail("Holdable cursors must be READ ONLY.")));
/* FOR UPDATE and SCROLL are not compatible */
if (query->rowMarks != NIL && (stmt->options & CURSOR_OPT_SCROLL))
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
/*------
translator: %s is a SQL row locking clause such as FOR UPDATE */
errmsg("DECLARE SCROLL CURSOR ... %s is not supported",
LCS_asString(((RowMarkClause *)
linitial(query->rowMarks))->strength)),
errdetail("Scrollable cursors must be READ ONLY.")));
/* FOR UPDATE and INSENSITIVE are not compatible */
if (query->rowMarks != NIL && (stmt->options & CURSOR_OPT_INSENSITIVE))
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
/*------
translator: %s is a SQL row locking clause such as FOR UPDATE */
errmsg("DECLARE INSENSITIVE CURSOR ... %s is not supported",
LCS_asString(((RowMarkClause *)
linitial(query->rowMarks))->strength)),
errdetail("Insensitive cursors must be READ ONLY.")));
/* represent the command as a utility Query */
result = makeNode(Query);
result->commandType = CMD_UTILITY;
result->utilityStmt = (Node *) stmt;
return result;
}
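/*
 * For illustration only, with a hypothetical table foo: each of these
 * cursor declarations parses but is rejected by one of the checks above,
 *
 *		DECLARE c SCROLL NO SCROLL CURSOR FOR SELECT * FROM foo;
 *		DECLARE c CURSOR WITH HOLD FOR SELECT * FROM foo FOR UPDATE;
 *		DECLARE c SCROLL CURSOR FOR SELECT * FROM foo FOR UPDATE;
 *		DECLARE c INSENSITIVE CURSOR FOR SELECT * FROM foo FOR UPDATE;
 *
 * whereas a plain "DECLARE c CURSOR FOR SELECT * FROM foo" comes out as a
 * CMD_UTILITY Query whose utilityStmt is the DeclareCursorStmt, with the
 * transformed SELECT stored back into stmt->query.
 */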
/*
* transformExplainStmt -
* transform an EXPLAIN Statement
*
* EXPLAIN is like other utility statements in that we emit it as a
* CMD_UTILITY Query node; however, we must first transform the contained
* query. We used to postpone that until execution, but it's really necessary
* to do it during the normal parse analysis phase to ensure that side effects
* of parser hooks happen at the expected time.
*/
static Query *
transformExplainStmt(ParseState *pstate, ExplainStmt *stmt)
{
Query *result;
/* transform contained query, allowing SELECT INTO */
stmt->query = (Node *) transformOptionalSelectInto(pstate, stmt->query);
/* represent the command as a utility Query */
result = makeNode(Query);
result->commandType = CMD_UTILITY;
result->utilityStmt = (Node *) stmt;
return result;
}
/*
* transformCreateTableAsStmt -
* transform a CREATE TABLE AS, SELECT ... INTO, or CREATE MATERIALIZED VIEW
* Statement
*
* As with DECLARE CURSOR and EXPLAIN, transform the contained statement now.
*/
static Query *
transformCreateTableAsStmt(ParseState *pstate, CreateTableAsStmt *stmt)
{
Query *result;
Query *query;
/* transform contained query, not allowing SELECT INTO */
query = transformStmt(pstate, stmt->query);
stmt->query = (Node *) query;
/* additional work needed for CREATE MATERIALIZED VIEW */
if (stmt->relkind == OBJECT_MATVIEW)
{
/*
* Prohibit a data-modifying CTE in the query used to create a
* materialized view. It's not sufficiently clear what the user would
* want to happen if the MV is refreshed or incrementally maintained.
*/
if (query->hasModifyingCTE)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("materialized views must not use data-modifying statements in WITH")));
/*
* Check whether any temporary database objects are used in the
* creation query. It would be hard to refresh data or incrementally
* maintain it if a source disappeared.
*/
if (isQueryUsingTempRelation(query))
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("materialized views must not use temporary tables or views")));
/*
* A materialized view would either need to save parameters for use in
* maintaining/loading the data or prohibit them entirely. The latter
* seems safer and more sane.
*/
if (query_contains_extern_params(query))
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("materialized views may not be defined using bound parameters")));
/*
* For now, we disallow unlogged materialized views, because it seems
* like a bad idea for them to just go to empty after a crash. (If we
* could mark them as unpopulated, that would be better, but that
* requires catalog changes which crash recovery can't presently
* handle.)
*/
if (stmt->into->rel->relpersistence == RELPERSISTENCE_UNLOGGED)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("materialized views cannot be unlogged")));
/*
* At runtime, we'll need a copy of the parsed-but-not-rewritten Query
* for purposes of creating the view's ON SELECT rule. We stash that
* in the IntoClause because that's where intorel_startup() can
* conveniently get it from.
*/
stmt->into->viewQuery = (Node *) copyObject(query);
}
/* represent the command as a utility Query */
result = makeNode(Query);
result->commandType = CMD_UTILITY;
result->utilityStmt = (Node *) stmt;
return result;
}
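/*
 * Rough illustration of the statement forms handled above (hypothetical
 * object names, not taken from the source):
 *
 *		CREATE TABLE t AS SELECT ...;
 *		SELECT ... INTO t FROM ...;
 *		CREATE MATERIALIZED VIEW mv AS SELECT ...;
 *
 * For the materialized-view case, the checks above reject a defining query
 * that uses a data-modifying CTE, a temporary table or view, a bound
 * parameter such as $1, or an UNLOGGED target relation.
 */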
/*
* transform a CallStmt
*
* We need to do parse analysis on the procedure call and its arguments.
*/
static Query *
transformCallStmt(ParseState *pstate, CallStmt *stmt)
{
List *targs;
ListCell *lc;
Node *node;
Query *result;
targs = NIL;
foreach(lc, stmt->funccall->args)
{
targs = lappend(targs, transformExpr(pstate,
(Node *) lfirst(lc),
EXPR_KIND_CALL_ARGUMENT));
}
node = ParseFuncOrColumn(pstate,
stmt->funccall->funcname,
targs,
pstate->p_last_srf,
stmt->funccall,
true,
stmt->funccall->location);
assign_expr_collations(pstate, node);
stmt->funcexpr = castNode(FuncExpr, node);
result = makeNode(Query);
result->commandType = CMD_UTILITY;
result->utilityStmt = (Node *) stmt;
return result;
}
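/*
 * Rough illustration (hypothetical procedure name): for a statement such as
 *
 *		CALL log_event('login', now());
 *
 * each argument is transformed with EXPR_KIND_CALL_ARGUMENT, the call is
 * resolved by ParseFuncOrColumn(), and the resulting FuncExpr is stored in
 * stmt->funcexpr of a CMD_UTILITY Query.
 */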
/*
* Produce a string representation of a LockClauseStrength value.
* This should only be applied to valid values (not LCS_NONE).
*/
const char *
LCS_asString(LockClauseStrength strength)
{
switch (strength)
{
case LCS_NONE:
Assert(false);
break;
case LCS_FORKEYSHARE:
return "FOR KEY SHARE";
case LCS_FORSHARE:
return "FOR SHARE";
case LCS_FORNOKEYUPDATE:
return "FOR NO KEY UPDATE";
case LCS_FORUPDATE:
return "FOR UPDATE";
}
return "FOR some"; /* shouldn't happen */
}
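/*
 * For example, LCS_asString(LCS_FORKEYSHARE) returns "FOR KEY SHARE"; the
 * checks below interpolate the result into their error messages via %s.
 */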
/*
* Check for features that are not supported with FOR [KEY] UPDATE/SHARE.
*
* exported so planner can check again after rewriting, query pullup, etc
*/
void
CheckSelectLocking(Query *qry, LockClauseStrength strength)
{
Assert(strength != LCS_NONE); /* else caller error */
if (qry->setOperations)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
/*------
translator: %s is a SQL row locking clause such as FOR UPDATE */
errmsg("%s is not allowed with UNION/INTERSECT/EXCEPT",
LCS_asString(strength))));
if (qry->distinctClause != NIL)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
/*------
translator: %s is a SQL row locking clause such as FOR UPDATE */
errmsg("%s is not allowed with DISTINCT clause",
LCS_asString(strength))));
if (qry->groupClause != NIL)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
/*------
translator: %s is a SQL row locking clause such as FOR UPDATE */
errmsg("%s is not allowed with GROUP BY clause",
LCS_asString(strength))));
if (qry->havingQual != NULL)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
/*------
translator: %s is a SQL row locking clause such as FOR UPDATE */
errmsg("%s is not allowed with HAVING clause",
LCS_asString(strength))));
if (qry->hasAggs)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
/*------
translator: %s is a SQL row locking clause such as FOR UPDATE */
errmsg("%s is not allowed with aggregate functions",
LCS_asString(strength))));
if (qry->hasWindowFuncs)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
/*------
translator: %s is a SQL row locking clause such as FOR UPDATE */
errmsg("%s is not allowed with window functions",
LCS_asString(strength))));
if (qry->hasTargetSRFs)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
/*------
translator: %s is a SQL row locking clause such as FOR UPDATE */
errmsg("%s is not allowed with set-returning functions in the target list",
LCS_asString(strength))));
}
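/*
 * Rough examples of queries rejected by the checks above (hypothetical
 * table and column names):
 *
 *		SELECT a FROM t UNION SELECT a FROM u FOR UPDATE;
 *		SELECT DISTINCT a FROM t FOR SHARE;
 *		SELECT count(*) FROM t FOR KEY SHARE;
 *		SELECT generate_series(1, 10) FOR UPDATE;
 *
 * each of which reaches one of the ereport() calls, with the locking clause
 * name supplied by LCS_asString().
 */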
/*
* Transform a FOR [KEY] UPDATE/SHARE clause
*
* This basically involves replacing names by integer relids.
*
* NB: if you need to change this, see also markQueryForLocking()
* in rewriteHandler.c, and isLockedRefname() in parse_relation.c.
*/
static void
transformLockingClause(ParseState *pstate, Query *qry, LockingClause *lc,
bool pushedDown)
{
List *lockedRels = lc->lockedRels;
ListCell *l;
ListCell *rt;
Index i;
LockingClause *allrels;
CheckSelectLocking(qry, lc->strength);
/* make a clause we can pass down to subqueries to select all rels */
allrels = makeNode(LockingClause);
allrels->lockedRels = NIL; /* indicates all rels */
allrels->strength = lc->strength;
allrels->waitPolicy = lc->waitPolicy;
if (lockedRels == NIL)
{
/* all regular tables used in query */
i = 0;
foreach(rt, qry->rtable)
{
RangeTblEntry *rte = (RangeTblEntry *) lfirst(rt);
++i;
switch (rte->rtekind)
{
case RTE_RELATION:
applyLockingClause(qry, i, lc->strength, lc->waitPolicy,
pushedDown);
rte->requiredPerms |= ACL_SELECT_FOR_UPDATE;
break;
case RTE_SUBQUERY:
applyLockingClause(qry, i, lc->strength, lc->waitPolicy,
pushedDown);
/*
* FOR UPDATE/SHARE of subquery is propagated to all of
* subquery's rels, too. We could do this later (based on
* the marking of the subquery RTE) but it is convenient
* to have local knowledge in each query level about which
* rels need to be opened with RowShareLock.
*/
transformLockingClause(pstate, rte->subquery,
allrels, true);
break;
default:
/* ignore JOIN, SPECIAL, FUNCTION, VALUES, CTE RTEs */
break;
}
}
}
else
{
/* just the named tables */
foreach(l, lockedRels)
{
RangeVar *thisrel = (RangeVar *) lfirst(l);
/* For simplicity we insist on unqualified alias names here */
if (thisrel->catalogname || thisrel->schemaname)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
/*------
translator: %s is a SQL row locking clause such as FOR UPDATE */
errmsg("%s must specify unqualified relation names",
LCS_asString(lc->strength)),
parser_errposition(pstate, thisrel->location)));
i = 0;
foreach(rt, qry->rtable)
{
RangeTblEntry *rte = (RangeTblEntry *) lfirst(rt);
++i;
if (strcmp(rte->eref->aliasname, thisrel->relname) == 0)
{
switch (rte->rtekind)
{
case RTE_RELATION:
applyLockingClause(qry, i, lc->strength,
lc->waitPolicy, pushedDown);
rte->requiredPerms |= ACL_SELECT_FOR_UPDATE;
break;
case RTE_SUBQUERY:
applyLockingClause(qry, i, lc->strength,
lc->waitPolicy, pushedDown);
/* see comment above */
transformLockingClause(pstate, rte->subquery,
allrels, true);
break;
case RTE_JOIN:
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
/*------
translator: %s is a SQL row locking clause such as FOR UPDATE */
errmsg("%s cannot be applied to a join",
LCS_asString(lc->strength)),
parser_errposition(pstate, thisrel->location)));
break;
case RTE_FUNCTION:
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
/*------
translator: %s is a SQL row locking clause such as FOR UPDATE */
errmsg("%s cannot be applied to a function",
LCS_asString(lc->strength)),
parser_errposition(pstate, thisrel->location)));
break;
case RTE_TABLEFUNC:
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
/*------
translator: %s is a SQL row locking clause such as FOR UPDATE */
errmsg("%s cannot be applied to a table function",
LCS_asString(lc->strength)),
parser_errposition(pstate, thisrel->location)));
break;
case RTE_VALUES:
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
/*------
translator: %s is a SQL row locking clause such as FOR UPDATE */
errmsg("%s cannot be applied to VALUES",
LCS_asString(lc->strength)),
parser_errposition(pstate, thisrel->location)));
break;
case RTE_CTE:
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
/*------
translator: %s is a SQL row locking clause such as FOR UPDATE */
errmsg("%s cannot be applied to a WITH query",
LCS_asString(lc->strength)),
parser_errposition(pstate, thisrel->location)));
break;
case RTE_NAMEDTUPLESTORE:
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
/*------
translator: %s is a SQL row locking clause such as FOR UPDATE */
errmsg("%s cannot be applied to a named tuplestore",
LCS_asString(lc->strength)),
parser_errposition(pstate, thisrel->location)));
break;
/* Shouldn't be possible to see RTE_RESULT here */
default:
elog(ERROR, "unrecognized RTE type: %d",
(int) rte->rtekind);
break;
}
break; /* out of foreach loop */
}
}
if (rt == NULL)
ereport(ERROR,
(errcode(ERRCODE_UNDEFINED_TABLE),
/*------
translator: %s is a SQL row locking clause such as FOR UPDATE */
errmsg("relation \"%s\" in %s clause not found in FROM clause",
thisrel->relname,
LCS_asString(lc->strength)),
parser_errposition(pstate, thisrel->location)));
}
}
}
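/*
 * Rough illustration (hypothetical relation names): given
 *
 *		SELECT * FROM a JOIN (SELECT * FROM b) AS sub ON a.id = sub.id
 *			FOR UPDATE OF a;
 *
 * only the rangetable entry whose alias matches "a" receives a row mark,
 * whereas a bare FOR UPDATE (no OF list) marks every RTE_RELATION and
 * recurses into each RTE_SUBQUERY with an all-rels LockingClause.
 */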
/*
* Record locking info for a single rangetable item
*/
void
applyLockingClause(Query *qry, Index rtindex,
LockClauseStrength strength, LockWaitPolicy waitPolicy,
bool pushedDown)
{
RowMarkClause *rc;
Assert(strength != LCS_NONE); /* else caller error */
/* If it's an explicit clause, make sure hasForUpdate gets set */
if (!pushedDown)
qry->hasForUpdate = true;
/* Check for pre-existing entry for same rtindex */
if ((rc = get_parse_rowmark(qry, rtindex)) != NULL)
{
/*
* If the same RTE is specified with more than one locking strength,
* use the strongest. (Reasonable, since you can't take both a shared
* and exclusive lock at the same time; it'll end up being exclusive
* anyway.)
*
* Similarly, if the same RTE is specified with more than one lock
* wait policy, consider that NOWAIT wins over SKIP LOCKED, which in
* turn wins over waiting for the lock (the default). This is a bit
* more debatable but raising an error doesn't seem helpful. (Consider
* for instance SELECT FOR UPDATE NOWAIT from a view that internally
* contains a plain FOR UPDATE spec.) Having NOWAIT win over SKIP
* LOCKED is reasonable since the former throws an error in case of
* coming across a locked tuple, which may be undesirable in some
* cases but it seems better than silently returning inconsistent
* results.
*
* And of course pushedDown becomes false if any clause is explicit.
*/
rc->strength = Max(rc->strength, strength);
rc->waitPolicy = Max(rc->waitPolicy, waitPolicy);
rc->pushedDown &= pushedDown;
return;
}
/* Make a new RowMarkClause */
rc = makeNode(RowMarkClause);
rc->rti = rtindex;
rc->strength = strength;
rc->waitPolicy = waitPolicy;
rc->pushedDown = pushedDown;
qry->rowMarks = lappend(qry->rowMarks, rc);
}
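/*
 * Rough example of the merging rules above: if the same rangetable entry is
 * first marked FOR SHARE ... SKIP LOCKED (say, by a view's own locking
 * clause) and later receives FOR UPDATE NOWAIT, the existing RowMarkClause
 * is upgraded to FOR UPDATE with NOWAIT rather than raising an error or
 * creating a second entry.
 */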
/*
* Coverage testing for raw_expression_tree_walker().
*
* When enabled, we run raw_expression_tree_walker() over every DML statement
* submitted to parse analysis. Without this provision, that function is only
* applied in limited cases involving CTEs, and we don't really want to have
* to test everything inside as well as outside a CTE.
*/
#ifdef RAW_EXPRESSION_COVERAGE_TEST
static bool
test_raw_expression_coverage(Node *node, void *context)
{
if (node == NULL)
return false;
return raw_expression_tree_walker(node,
test_raw_expression_coverage,
context);
}
#endif /* RAW_EXPRESSION_COVERAGE_TEST */