/*-------------------------------------------------------------------------
 *
 * parsenodes.h
 *	  definitions for parse tree nodes
 *
 * Many of the node types used in parsetrees include a "location" field.
 * This is a byte (not character) offset in the original source text, to be
 * used for positioning an error cursor when there is an error related to
 * the node.  Access to the original source text is needed to make use of
 * the location.  At the topmost (statement) level, we also provide a
 * statement length, likewise measured in bytes, for convenience in
 * identifying statement boundaries in multi-statement source strings.
 *
 *
 * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 * src/include/nodes/parsenodes.h
 *
 *-------------------------------------------------------------------------
 */
#ifndef PARSENODES_H
#define PARSENODES_H

#include "nodes/bitmapset.h"
#include "nodes/lockoptions.h"
#include "nodes/primnodes.h"
#include "nodes/value.h"
#include "partitioning/partdefs.h"


typedef enum OverridingKind
{
	OVERRIDING_NOT_SET = 0,
	OVERRIDING_USER_VALUE,
	OVERRIDING_SYSTEM_VALUE
} OverridingKind;

/* Possible sources of a Query */
typedef enum QuerySource
{
	QSRC_ORIGINAL,				/* original parsetree (explicit query) */
	QSRC_PARSER,				/* added by parse analysis (now unused) */
	QSRC_INSTEAD_RULE,			/* added by unconditional INSTEAD rule */
	QSRC_QUAL_INSTEAD_RULE,		/* added by conditional INSTEAD rule */
	QSRC_NON_INSTEAD_RULE		/* added by non-INSTEAD rule */
} QuerySource;

/* Sort ordering options for ORDER BY and CREATE INDEX */
typedef enum SortByDir
{
	SORTBY_DEFAULT,
	SORTBY_ASC,
	SORTBY_DESC,
	SORTBY_USING				/* not allowed in CREATE INDEX ... */
} SortByDir;

typedef enum SortByNulls
{
	SORTBY_NULLS_DEFAULT,
	SORTBY_NULLS_FIRST,
	SORTBY_NULLS_LAST
} SortByNulls;

/*
 * Grantable rights are encoded so that we can OR them together in a bitmask.
 * The present representation of AclItem limits us to 16 distinct rights,
 * even though AclMode is defined as uint32.  See utils/acl.h.
 *
 * Caution: changing these codes breaks stored ACLs, hence forces initdb.
 */
typedef uint32 AclMode;			/* a bitmask of privilege bits */

#define ACL_INSERT		(1<<0)	/* for relations */
#define ACL_SELECT		(1<<1)
#define ACL_UPDATE		(1<<2)
#define ACL_DELETE		(1<<3)
#define ACL_TRUNCATE	(1<<4)
#define ACL_REFERENCES	(1<<5)
#define ACL_TRIGGER		(1<<6)
#define ACL_EXECUTE		(1<<7)	/* for functions */
#define ACL_USAGE		(1<<8)	/* for languages, namespaces, FDWs, and
								 * servers */
#define ACL_CREATE		(1<<9)	/* for namespaces and databases */
#define ACL_CREATE_TEMP (1<<10) /* for databases */
#define ACL_CONNECT		(1<<11) /* for databases */
#define N_ACL_RIGHTS	12		/* 1 plus the last 1<<x */
#define ACL_NO_RIGHTS	0
/* Currently, SELECT ... FOR [KEY] UPDATE/SHARE requires UPDATE privileges */
#define ACL_SELECT_FOR_UPDATE	ACL_UPDATE


/*****************************************************************************
 *	Query Tree
 *****************************************************************************/

/*
 * Query -
 *	  Parse analysis turns all statements into a Query tree
 *	  for further processing by the rewriter and planner.
 *
 *	  Utility statements (i.e. non-optimizable statements) have the
 *	  utilityStmt field set, and the rest of the Query is mostly dummy.
 *
 *	  Planning converts a Query tree into a Plan tree headed by a PlannedStmt
 *	  node --- the Query structure is not used by the executor.
 */
typedef struct Query
{
	NodeTag		type;

	CmdType		commandType;	/* select|insert|update|delete|utility */

	QuerySource querySource;	/* where did I come from? */

	uint64		queryId;		/* query identifier (can be set by plugins) */

	bool		canSetTag;		/* do I set the command result tag? */

	Node	   *utilityStmt;	/* non-null if commandType == CMD_UTILITY */

	int			resultRelation; /* rtable index of target relation for
								 * INSERT/UPDATE/DELETE; 0 for SELECT */

	bool		hasAggs;		/* has aggregates in tlist or havingQual */
	bool		hasWindowFuncs; /* has window functions in tlist */
	bool		hasTargetSRFs;	/* has set-returning functions in tlist */
	bool		hasSubLinks;	/* has subquery SubLink */
	bool		hasDistinctOn;	/* distinctClause is from DISTINCT ON */
	bool		hasRecursive;	/* WITH RECURSIVE was specified */
	bool		hasModifyingCTE;	/* has INSERT/UPDATE/DELETE in WITH */
	bool		hasForUpdate;	/* FOR [KEY] UPDATE/SHARE was specified */
	bool		hasRowSecurity; /* rewriter has applied some RLS policy */

	List	   *cteList;		/* WITH list (of CommonTableExpr's) */

	List	   *rtable;			/* list of range table entries */
	FromExpr   *jointree;		/* table join tree (FROM and WHERE clauses) */

	List	   *targetList;		/* target list (of TargetEntry) */

	OverridingKind override;	/* OVERRIDING clause */

	OnConflictExpr *onConflict; /* ON CONFLICT DO [NOTHING | UPDATE] */

	List	   *returningList;	/* return-values list (of TargetEntry) */

	List	   *groupClause;	/* a list of SortGroupClause's */
	List	   *groupingSets;	/* a list of GroupingSet's if present */

	Node	   *havingQual;		/* qualifications applied to groups */
|
Hi!
INTERSECT and EXCEPT are available for postgresql-v6.4!
The patch against v6.4 is included at the end of the current text
(in uuencoded form!)
I also included the text of my Master's Thesis. (a postscript
version). I hope that you find something of it useful and would be
happy if parts of it find their way into the PostgreSQL documentation
project (If so, tell me, then I send the sources of the document!)
The contents of the document are:
-) The first chapter might be of less interest as it gives only an
overview of SQL.
-) The second chapter gives a description of much of PostgreSQL's
features (like user defined types etc. and how to use these features)
-) The third chapter starts with an overview of PostgreSQL's internal
structure with focus on the stages a query has to pass (i.e. parser,
planner/optimizer, executor). Then a detailed description of the
implementation of the Having clause and the Intersect/Except logic is
given.
Originally I worked on v6.3.2 but never found time enough to prepare
and post a patch. Now I applied the changes to v6.4 to get Intersect
and Except working with the new version. Chapter 3 of my documentation
deals with the changes against v6.3.2, so keep that in mind when
comparing the parts of the code printed there with the patched sources
of v6.4.
Here are some remarks on the patch. There are some things that have
still to be done but at the moment I don't have time to do them
myself. (I'm doing my military service at the moment) Sorry for that
:-(
-) I used a rewrite technique for the implementation of the Except/Intersect
logic which rewrites the query to a semantically equivalent query before
it is handed to the rewrite system (for views, rules etc.), planner,
executor etc.
-) In v6.3.2 the types of the attributes of two select statements
connected by the UNION keyword had to match 100%. In v6.4 the types
only need to be compatible (i.e. int and float can be mixed). Since this
feature did not exist when I worked on Intersect/Except it
does not work correctly for Except/Intersect queries WHEN USED IN
COMBINATION WITH UNIONS! (i.e. sometimes the wrong type is used for the
resulting table. This is because until now the types of the attributes of
the first select statement have been used for the resulting table.
When Intersects and/or Excepts are used in combination with Unions it
might happen, that the first select statement of the original query
appears at another position in the query which will be executed. The reason
for this is the technique used for the implementation of
Except/Intersect which does a query rewrite!)
NOTE: It is NOT broken for pure UNION queries and pure INTERSECT/EXCEPT
queries!!!
-) I had to add the field intersect_clause to some data structures
but did not find time to implement printfuncs for the new field.
This does NOT break the debug modes but when an Except/Intersect
is used the query debug output will be the already rewritten query.
-) Massive changes to the grammar rules for SELECT and INSERT statements
have been necessary (see comments in gram.y and documentation for
details) in order to be able to use mixed queries like
(SELECT ... UNION (SELECT ... EXCEPT SELECT)) INTERSECT SELECT...;
-) When using UNION/EXCEPT/INTERSECT you will get:
NOTICE: equal: "Don't know if nodes of type xxx are equal".
I did not have time to add comparison support for all the needed nodes,
but the default behaviour of the function equal met my requirements.
I did not dare to suppress this message!
That's the reason why the regression test for union will fail: These
messages are also included in the union.out file!
-) Somebody of you changed the union_planner() function for v6.4
(I copied the targetlist to new_tlist and that was removed and
replaced by a cleanup of the original targetlist). These changes
broke some HAVING queries executed against views, so I changed
it back again. I did not have time to examine the differences between the
two versions but now it works :-)
If you want to find out, try the file queries/view_having.sql on
both versions and compare the results. Two queries won't produce a
correct result with your version.
regards
Stefan
1999-01-18 01:10:17 +01:00
|
|
|
|
2008-12-28 19:54:01 +01:00
|
|
|
List *windowClause; /* a list of WindowClause's */
|
|
|
|
|
2008-08-02 23:32:01 +02:00
|
|
|
List *distinctClause; /* a list of SortGroupClause's */
|
2000-10-05 21:11:39 +02:00
|
|
|
|
2008-08-02 23:32:01 +02:00
|
|
|
List *sortClause; /* a list of SortGroupClause's */
|
1999-08-21 05:49:17 +02:00
|
|
|
|
2006-07-26 21:31:51 +02:00
|
|
|
Node *limitOffset; /* # of result tuples to skip (int8 expr) */
|
|
|
|
Node *limitCount; /* # of result tuples to return (int8 expr) */
|
1997-12-24 07:06:58 +01:00
|
|
|
|
2006-04-30 20:30:40 +02:00
|
|
|
List *rowMarks; /* a list of RowMarkClause's */
|
|
|
|
|
2005-10-15 04:49:52 +02:00
|
|
|
Node *setOperations; /* set-operation tree if this is top level of
|
|
|
|
* a UNION/INTERSECT/EXCEPT query */
|
2010-08-07 04:44:09 +02:00
|
|
|
|
2011-04-10 17:42:00 +02:00
|
|
|
List *constraintDeps; /* a list of pg_constraint OIDs that the query
|
2010-08-07 04:44:09 +02:00
|
|
|
* depends on to be semantically valid */
|
2015-10-05 13:38:58 +02:00
|
|
|
|
2018-09-18 21:08:28 +02:00
|
|
|
List *withCheckOptions; /* a list of WithCheckOption's (added
|
|
|
|
* during rewrite) */
|
Change representation of statement lists, and add statement location info.
This patch makes several changes that improve the consistency of
representation of lists of statements. It's always been the case
that the output of parse analysis is a list of Query nodes, whatever
the types of the individual statements in the list. This patch brings
similar consistency to the outputs of raw parsing and planning steps:
* The output of raw parsing is now always a list of RawStmt nodes;
the statement-type-dependent nodes are one level down from that.
* The output of pg_plan_queries() is now always a list of PlannedStmt
nodes, even for utility statements. In the case of a utility statement,
"planning" just consists of wrapping a CMD_UTILITY PlannedStmt around
the utility node. This list representation is now used in Portal and
CachedPlan plan lists, replacing the former convention of intermixing
PlannedStmts with bare utility-statement nodes.
Now, every list of statements has a consistent head-node type depending
on how far along it is in processing. This allows changing many places
that formerly used generic "Node *" pointers to use a more specific
pointer type, thus reducing the number of IsA() tests and casts needed,
as well as improving code clarity.
Also, the post-parse-analysis representation of DECLARE CURSOR is changed
so that it looks more like EXPLAIN, PREPARE, etc. That is, the contained
SELECT remains a child of the DeclareCursorStmt rather than getting flipped
around to be the other way. It's now true for both Query and PlannedStmt
that utilityStmt is non-null if and only if commandType is CMD_UTILITY.
That allows simplifying a lot of places that were testing both fields.
(I think some of those were just defensive programming, but in many places,
it was actually necessary to avoid confusing DECLARE CURSOR with SELECT.)
Because PlannedStmt carries a canSetTag field, we're also able to get rid
of some ad-hoc rules about how to reconstruct canSetTag for a bare utility
statement; specifically, the assumption that a utility is canSetTag if and
only if it's the only one in its list. While I see no near-term need for
relaxing that restriction, it's nice to get rid of the ad-hocery.
The API of ProcessUtility() is changed so that what it's passed is the
wrapper PlannedStmt not just the bare utility statement. This will affect
all users of ProcessUtility_hook, but the changes are pretty trivial; see
the affected contrib modules for examples of the minimum change needed.
(Most compilers should give pointer-type-mismatch warnings for uncorrected
code.)
There's also a change in the API of ExplainOneQuery_hook, to pass through
cursorOptions instead of expecting hook functions to know what to pick.
This is needed because of the DECLARE CURSOR changes, but really should
have been done in 9.6; it's unlikely that any extant hook functions
know about using CURSOR_OPT_PARALLEL_OK.
Finally, teach gram.y to save statement boundary locations in RawStmt
nodes, and pass those through to Query and PlannedStmt nodes. This allows
more intelligent handling of cases where a source query string contains
multiple statements. This patch doesn't actually do anything with the
information, but a follow-on patch will. (Passing this information through
cleanly is the true motivation for these changes; while I think this is all
good cleanup, it's unlikely we'd have bothered without this end goal.)
catversion bump because addition of location fields to struct Query
affects stored rules.
This patch is by me, but it owes a good deal to Fabien Coelho who did
a lot of preliminary work on the problem, and also reviewed the patch.
Discussion: https://postgr.es/m/alpine.DEB.2.20.1612200926310.29821@lancre
2017-01-14 22:02:35 +01:00
|
|
|
|
|
|
|
/*
|
|
|
|
* The following two fields identify the portion of the source text string
|
|
|
|
* containing this query. They are typically only populated in top-level
|
|
|
|
* Queries, not in sub-queries. When not set, they might both be zero, or
|
|
|
|
* both be -1 meaning "unknown".
|
|
|
|
*/
|
|
|
|
int stmt_location; /* start location, or -1 if unknown */
|
|
|
|
int stmt_len; /* length in bytes; 0 means "rest of string" */
|
1997-12-05 00:55:52 +01:00
|
|
|
} Query;
|
1996-08-28 03:59:28 +02:00
|
|
|
|
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/****************************************************************************
|
|
|
|
* Supporting data structures for Parse Trees
|
1996-08-28 03:59:28 +02:00
|
|
|
*
|
2002-03-08 05:37:18 +01:00
|
|
|
* Most of these node types appear in raw parsetrees output by the grammar,
|
2014-05-06 18:12:18 +02:00
|
|
|
* and get transformed to something else by the analyzer. A few of them
|
2002-03-08 05:37:18 +01:00
|
|
|
* are used as-is in transformed querytrees.
|
|
|
|
****************************************************************************/
|
1996-08-28 03:59:28 +02:00
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/*
|
|
|
|
* TypeName - specifies a type in definitions
|
2002-03-29 20:06:29 +01:00
|
|
|
*
|
|
|
|
* For TypeName structures generated internally, it is often easier to
|
|
|
|
* specify the type by OID than by name. If "names" is NIL then the
|
2009-07-16 08:33:46 +02:00
|
|
|
* actual type OID is given by typeOid, otherwise typeOid is unused.
|
2006-12-30 22:21:56 +01:00
|
|
|
* Similarly, if "typmods" is NIL then the actual typmod is expected to
|
Remove collation information from TypeName, where it does not belong.
The initial collations patch treated a COLLATE spec as part of a TypeName,
following what can only be described as brain fade on the part of the SQL
committee. It's a lot more reasonable to treat COLLATE as a syntactically
separate object, so that it can be added in only the productions where it
actually belongs, rather than needing to reject it in a boatload of places
where it doesn't belong (something the original patch mostly failed to do).
In addition this change lets us meet the spec's requirement to allow
COLLATE anywhere in the clauses of a ColumnDef, and it avoids unfriendly
behavior for constructs such as "foo::type COLLATE collation".
To do this, pull collation information out of TypeName and put it in
ColumnDef instead, thus reverting most of the collation-related changes in
parse_type.c's API. I made one additional structural change, which was to
use a ColumnDef as an intermediate node in AT_AlterColumnType AlterTableCmd
nodes. This provides enough room to get rid of the "transform" wart in
AlterTableCmd too, since the ColumnDef can carry the USING expression
easily enough.
Also fix some other minor bugs that have crept in in the same areas,
like failure to copy recently-added fields of ColumnDef in copyfuncs.c.
While at it, document the formerly secret ability to specify a collation
in ALTER TABLE ALTER COLUMN TYPE, ALTER TYPE ADD ATTRIBUTE, and
ALTER TYPE ALTER ATTRIBUTE TYPE; and correct some misstatements about
what the default collation selection will be when COLLATE is omitted.
BTW, the three-parameter form of format_type() should go away too,
since it just contributes to the confusion in this area; but I'll do
that in a separate patch.
2011-03-10 04:38:52 +01:00
|
|
|
* be prespecified in typemod, otherwise typemod is unused.
|
2002-03-29 20:06:29 +01:00
|
|
|
*
|
2017-08-16 06:22:32 +02:00
|
|
|
* If pct_type is true, then names is actually a field name and we look up
|
2014-05-06 18:12:18 +02:00
|
|
|
* the type of that field. Otherwise (the normal case), names is a type
|
2002-03-29 20:06:29 +01:00
|
|
|
* name possibly qualified with schema and database name.
|
1996-08-28 03:59:28 +02:00
|
|
|
*/
|
2002-03-08 05:37:18 +01:00
|
|
|
typedef struct TypeName
|
1997-09-07 07:04:48 +02:00
|
|
|
{
|
1997-09-08 04:41:22 +02:00
|
|
|
NodeTag type;
|
2002-03-29 20:06:29 +01:00
|
|
|
List *names; /* qualified name (list of Value strings) */
|
2009-07-16 08:33:46 +02:00
|
|
|
Oid typeOid; /* type identified by OID */
|
2002-03-08 05:37:18 +01:00
|
|
|
bool setof; /* is a set? */
|
2002-03-29 20:06:29 +01:00
|
|
|
bool pct_type; /* %TYPE specified? */
|
2006-12-30 22:21:56 +01:00
|
|
|
List *typmods; /* type modifier expression(s) */
|
|
|
|
int32 typemod; /* prespecified type modifier */
|
2002-03-08 05:37:18 +01:00
|
|
|
List *arrayBounds; /* array bounds */
|
2006-03-14 23:48:25 +01:00
|
|
|
int location; /* token location, or -1 if unknown */
|
2002-03-08 05:37:18 +01:00
|
|
|
} TypeName;
|
1996-08-28 03:59:28 +02:00
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/*
|
2002-03-21 17:02:16 +01:00
|
|
|
* ColumnRef - specifies a reference to a column, or possibly a whole tuple
|
|
|
|
*
|
2014-05-06 18:12:18 +02:00
|
|
|
* The "fields" list must be nonempty. It can contain string Value nodes
|
2008-08-30 03:39:14 +02:00
|
|
|
* (representing names) and A_Star nodes (representing occurrence of a '*').
|
|
|
|
* Currently, A_Star must appear only as the last list element --- the grammar
|
|
|
|
* is responsible for enforcing this!
|
2004-06-09 21:08:20 +02:00
|
|
|
*
|
2019-02-01 16:50:32 +01:00
|
|
|
* Note: any container subscripting or selection of fields from composite columns
|
2004-06-09 21:08:20 +02:00
|
|
|
* is represented by an A_Indirection node above the ColumnRef. However,
|
|
|
|
* for simplicity in the normal case, initial field selection from a table
|
|
|
|
* name is represented within ColumnRef and not by adding A_Indirection.
|
2002-03-21 17:02:16 +01:00
|
|
|
*/
|
|
|
|
typedef struct ColumnRef
|
|
|
|
{
|
|
|
|
NodeTag type;
|
2008-08-30 03:39:14 +02:00
|
|
|
List *fields; /* field names (Value strings) or A_Star */
|
2006-03-14 23:48:25 +01:00
|
|
|
int location; /* token location, or -1 if unknown */
|
2002-03-21 17:02:16 +01:00
|
|
|
} ColumnRef;
|
|
|
|
|
|
|
|
/*
|
2004-06-09 21:08:20 +02:00
|
|
|
* ParamRef - specifies a $n parameter reference
|
1996-08-28 03:59:28 +02:00
|
|
|
*/
|
2002-03-21 17:02:16 +01:00
|
|
|
typedef struct ParamRef
|
2001-06-10 01:21:55 +02:00
|
|
|
{
|
|
|
|
NodeTag type;
|
2002-03-08 05:37:18 +01:00
|
|
|
int number; /* the number of the parameter */
|
2008-08-29 01:09:48 +02:00
|
|
|
int location; /* token location, or -1 if unknown */
|
2002-03-21 17:02:16 +01:00
|
|
|
} ParamRef;
|
2001-06-10 01:21:55 +02:00
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/*
|
2003-02-10 05:44:47 +01:00
|
|
|
* A_Expr - infix, prefix, and postfix expressions
|
2002-03-08 05:37:18 +01:00
|
|
|
*/
|
2003-02-10 05:44:47 +01:00
|
|
|
typedef enum A_Expr_Kind
|
|
|
|
{
|
|
|
|
AEXPR_OP, /* normal operator */
|
2003-06-29 02:33:44 +02:00
|
|
|
AEXPR_OP_ANY, /* scalar op ANY (array) */
|
|
|
|
AEXPR_OP_ALL, /* scalar op ALL (array) */
|
2003-02-10 05:44:47 +01:00
|
|
|
AEXPR_DISTINCT, /* IS DISTINCT FROM - name must be "=" */
|
2016-07-28 23:23:03 +02:00
|
|
|
AEXPR_NOT_DISTINCT, /* IS NOT DISTINCT FROM - name must be "=" */
|
2003-02-16 03:30:39 +01:00
|
|
|
AEXPR_NULLIF, /* NULLIF - name must be "=" */
|
2005-11-28 05:35:32 +01:00
|
|
|
AEXPR_OF, /* IS [NOT] OF - name must be "=" or "<>" */
|
2015-02-22 19:57:56 +01:00
|
|
|
AEXPR_IN, /* [NOT] IN - name must be "=" or "<>" */
|
2015-02-23 18:46:46 +01:00
|
|
|
AEXPR_LIKE, /* [NOT] LIKE - name must be "~~" or "!~~" */
|
|
|
|
AEXPR_ILIKE, /* [NOT] ILIKE - name must be "~~*" or "!~~*" */
|
|
|
|
AEXPR_SIMILAR, /* [NOT] SIMILAR - name must be "~" or "!~" */
|
2015-02-22 19:57:56 +01:00
|
|
|
AEXPR_BETWEEN, /* name must be "BETWEEN" */
|
|
|
|
AEXPR_NOT_BETWEEN, /* name must be "NOT BETWEEN" */
|
|
|
|
AEXPR_BETWEEN_SYM, /* name must be "BETWEEN SYMMETRIC" */
|
Make operator precedence follow the SQL standard more closely.
While the SQL standard is pretty vague on the overall topic of operator
precedence (because it never presents a unified BNF for all expressions),
it does seem reasonable to conclude from the spec for <boolean value
expression> that OR has the lowest precedence, then AND, then NOT, then IS
tests, then the six standard comparison operators, then everything else
(since any non-boolean operator in a WHERE clause would need to be an
argument of one of these).
We were only sort of on board with that: most notably, while "<" ">" and
"=" had properly low precedence, "<=" ">=" and "<>" were treated as generic
operators and so had significantly higher precedence. And "IS" tests were
even higher precedence than those, which is very clearly wrong per spec.
Another problem was that "foo NOT SOMETHING bar" constructs, such as
"x NOT LIKE y", were treated inconsistently because of a bison
implementation artifact: they had the documented precedence with respect
to operators to their right, but behaved like NOT (i.e., very low priority)
with respect to operators to their left.
Fixing the precedence issues is just a small matter of rearranging the
precedence declarations in gram.y, except for the NOT problem, which
requires adding an additional lookahead case in base_yylex() so that we
can attach a different token precedence to NOT LIKE and allied two-word
operators.
The bulk of this patch is not the bug fix per se, but adding logic to
parse_expr.c to allow giving warnings if an expression has changed meaning
because of these precedence changes. These warnings are off by default
and are enabled by the new GUC operator_precedence_warning. It's believed
that very few applications will be affected by these changes, but it was
agreed that a warning mechanism is essential to help debug any that are.
2015-03-11 18:22:52 +01:00
|
|
|
AEXPR_NOT_BETWEEN_SYM, /* name must be "NOT BETWEEN SYMMETRIC" */
|
|
|
|
AEXPR_PAREN /* nameless dummy node for parentheses */
|
2003-08-08 23:42:59 +02:00
|
|
|
} A_Expr_Kind;
|
2003-02-10 05:44:47 +01:00
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
typedef struct A_Expr
|
1997-09-07 07:04:48 +02:00
|
|
|
{
|
1997-09-08 04:41:22 +02:00
|
|
|
NodeTag type;
|
2003-08-04 02:43:34 +02:00
|
|
|
A_Expr_Kind kind; /* see above */
|
2002-04-17 01:08:12 +02:00
|
|
|
List *name; /* possibly-qualified name of operator */
|
2003-02-10 05:44:47 +01:00
|
|
|
Node *lexpr; /* left argument, or NULL if none */
|
|
|
|
Node *rexpr; /* right argument, or NULL if none */
|
2006-03-14 23:48:25 +01:00
|
|
|
int location; /* token location, or -1 if unknown */
|
2002-03-08 05:37:18 +01:00
|
|
|
} A_Expr;
|
1996-08-28 03:59:28 +02:00
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/*
|
2008-04-29 22:44:49 +02:00
|
|
|
* A_Const - a literal constant
|
2002-03-08 05:37:18 +01:00
|
|
|
*/
|
|
|
|
typedef struct A_Const
|
2002-02-19 00:11:58 +01:00
|
|
|
{
|
|
|
|
NodeTag type;
|
2008-04-29 22:44:49 +02:00
|
|
|
Value val; /* value (includes type info, see value.h) */
|
2008-08-29 01:09:48 +02:00
|
|
|
int location; /* token location, or -1 if unknown */
|
2002-03-08 05:37:18 +01:00
|
|
|
} A_Const;
|
2002-02-19 00:11:58 +01:00
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/*
|
|
|
|
* TypeCast - a CAST expression
|
1996-08-28 03:59:28 +02:00
|
|
|
*/
|
2002-03-08 05:37:18 +01:00
|
|
|
typedef struct TypeCast
|
1997-09-07 07:04:48 +02:00
|
|
|
{
|
1997-09-08 04:41:22 +02:00
|
|
|
NodeTag type;
|
2002-03-08 05:37:18 +01:00
|
|
|
Node *arg; /* the expression being casted */
|
2009-07-16 08:33:46 +02:00
|
|
|
TypeName *typeName; /* the target type */
|
2008-08-29 01:09:48 +02:00
|
|
|
int location; /* token location, or -1 if unknown */
|
2002-03-08 05:37:18 +01:00
|
|
|
} TypeCast;
|
1996-08-28 03:59:28 +02:00
|
|
|
|
2011-03-11 22:27:51 +01:00
|
|
|
/*
|
|
|
|
* CollateClause - a COLLATE expression
|
|
|
|
*/
|
|
|
|
typedef struct CollateClause
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
Node *arg; /* input expression */
|
|
|
|
List *collname; /* possibly-qualified collation name */
|
|
|
|
int location; /* token location, or -1 if unknown */
|
|
|
|
} CollateClause;
|
|
|
|
|
Allow CURRENT/SESSION_USER to be used in certain commands
Commands such as ALTER USER, ALTER GROUP, ALTER ROLE, GRANT, and the
various ALTER OBJECT / OWNER TO, as well as ad-hoc clauses related to
roles such as the AUTHORIZATION clause of CREATE SCHEMA, the FOR clause
of CREATE USER MAPPING, and the FOR ROLE clause of ALTER DEFAULT
PRIVILEGES can now take the keywords CURRENT_USER and SESSION_USER as
user specifiers in place of an explicit user name.
This commit also fixes some quite ugly handling of special standards-
mandated syntax in CREATE USER MAPPING, which in particular would fail
to work in presence of a role named "current_user".
The special role specifiers PUBLIC and NONE also have more consistent
handling now.
Also take the opportunity to add location tracking to user specifiers.
Authors: Kyotaro Horiguchi. Heavily reworked by Álvaro Herrera.
Reviewed by: Rushabh Lathia, Adam Brightwell, Marti Raudsepp.
2015-03-09 19:41:54 +01:00
|
|
|
/*
|
|
|
|
* RoleSpec - a role name or one of a few special values.
|
|
|
|
*/
|
|
|
|
typedef enum RoleSpecType
|
|
|
|
{
|
2015-05-24 03:35:49 +02:00
|
|
|
ROLESPEC_CSTRING, /* role name is stored as a C string */
|
|
|
|
ROLESPEC_CURRENT_USER, /* role spec is CURRENT_USER */
|
|
|
|
ROLESPEC_SESSION_USER, /* role spec is SESSION_USER */
|
|
|
|
ROLESPEC_PUBLIC /* role name is "public" */
|
2015-03-09 19:41:54 +01:00
|
|
|
} RoleSpecType;
|
|
|
|
|
|
|
|
typedef struct RoleSpec
|
|
|
|
{
|
|
|
|
NodeTag type;
|
2015-05-24 03:35:49 +02:00
|
|
|
RoleSpecType roletype; /* Type of this rolespec */
|
|
|
|
char *rolename; /* filled only for ROLESPEC_CSTRING */
|
|
|
|
int location; /* token location, or -1 if unknown */
|
2015-03-09 19:41:54 +01:00
|
|
|
} RoleSpec;
|
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/*
|
|
|
|
* FuncCall - a function or aggregate invocation
|
|
|
|
*
|
Support ordered-set (WITHIN GROUP) aggregates.
This patch introduces generic support for ordered-set and hypothetical-set
aggregate functions, as well as implementations of the instances defined in
SQL:2008 (percentile_cont(), percentile_disc(), rank(), dense_rank(),
percent_rank(), cume_dist()). We also added mode() though it is not in the
spec, as well as versions of percentile_cont() and percentile_disc() that
can compute multiple percentile values in one pass over the data.
Unlike the original submission, this patch puts full control of the sorting
process in the hands of the aggregate's support functions. To allow the
support functions to find out how they're supposed to sort, a new API
function AggGetAggref() is added to nodeAgg.c. This allows retrieval of
the aggregate call's Aggref node, which may have other uses beyond the
immediate need. There is also support for ordered-set aggregates to
install cleanup callback functions, so that they can be sure that
infrastructure such as tuplesort objects gets cleaned up.
In passing, make some fixes in the recently-added support for variadic
aggregates, and make some editorial adjustments in the recent FILTER
additions for aggregates. Also, simplify use of IsBinaryCoercible() by
allowing it to succeed whenever the target type is ANY or ANYELEMENT.
It was inconsistent that it dealt with other polymorphic target types
but not these.
Atri Sharma and Andrew Gierth; reviewed by Pavel Stehule and Vik Fearing,
and rather heavily editorialized upon by Tom Lane
2013-12-23 22:11:35 +01:00
|
|
|
* agg_order (if not NIL) indicates we saw 'foo(... ORDER BY ...)', or if
|
|
|
|
* agg_within_group is true, it was 'foo(...) WITHIN GROUP (ORDER BY ...)'.
|
2002-03-08 05:37:18 +01:00
|
|
|
* agg_star indicates we saw a 'foo(*)' construct, while agg_distinct
|
2009-12-15 18:57:48 +01:00
|
|
|
* indicates we saw 'foo(DISTINCT ...)'. In any of these cases, the
|
|
|
|
* construct *must* be an aggregate call. Otherwise, it might be either an
|
2013-07-17 02:15:36 +02:00
|
|
|
* aggregate or some other kind of function. However, if FILTER or OVER is
|
|
|
|
* present it had better be an aggregate or window function.
|
2013-07-01 20:41:33 +02:00
|
|
|
*
|
Support multi-argument UNNEST(), and TABLE() syntax for multiple functions.
This patch adds the ability to write TABLE( function1(), function2(), ...)
as a single FROM-clause entry. The result is the concatenation of the
first row from each function, followed by the second row from each
function, etc; with NULLs inserted if any function produces fewer rows than
others. This is believed to be a much more useful behavior than what
Postgres currently does with multiple SRFs in a SELECT list.
This syntax also provides a reasonable way to combine use of column
definition lists with WITH ORDINALITY: put the column definition list
inside TABLE(), where it's clear that it doesn't control the ordinality
column as well.
Also implement SQL-compliant multiple-argument UNNEST(), by turning
UNNEST(a,b,c) into TABLE(unnest(a), unnest(b), unnest(c)).
The SQL standard specifies TABLE() with only a single function, not
multiple functions, and it seems to require an implicit UNNEST() which is
not what this patch does. There may be something wrong with that reading
of the spec, though, because if it's right then the spec's TABLE() is just
a pointless alternative spelling of UNNEST(). After further review of
that, we might choose to adopt a different syntax for what this patch does,
but in any case this functionality seems clearly worthwhile.
Andrew Gierth, reviewed by Zoltán Böszörményi and Heikki Linnakangas, and
significantly revised by me
2013-11-22 01:37:02 +01:00
|
|
|
* Normally, you'd initialize this via makeFuncCall() and then only change the
|
|
|
|
* parts of the struct its defaults don't match afterwards, as needed.
|
2002-03-08 05:37:18 +01:00
|
|
|
*/
|
|
|
|
typedef struct FuncCall
|
1997-10-28 16:11:45 +01:00
|
|
|
{
|
|
|
|
NodeTag type;
|
2002-04-09 22:35:55 +02:00
|
|
|
List *funcname; /* qualified name of function */
|
2002-03-08 05:37:18 +01:00
|
|
|
List *args; /* the arguments (list of exprs) */
|
2010-02-26 03:01:40 +01:00
|
|
|
List *agg_order; /* ORDER BY (list of SortBy) */
|
2013-07-17 02:15:36 +02:00
|
|
|
Node *agg_filter; /* FILTER clause, if any */
|
Phase 2 of pgindent updates.
Change pg_bsd_indent to follow upstream rules for placement of comments
to the right of code, and remove pgindent hack that caused comments
following #endif to not obey the general rule.
Commit e3860ffa4dd0dad0dd9eea4be9cc1412373a8c89 wasn't actually using
the published version of pg_bsd_indent, but a hacked-up version that
tried to minimize the amount of movement of comments to the right of
code. The situation of interest is where such a comment has to be
moved to the right of its default placement at column 33 because there's
code there. BSD indent has always moved right in units of tab stops
in such cases --- but in the previous incarnation, indent was working
in 8-space tab stops, while now it knows we use 4-space tabs. So the
net result is that in about half the cases, such comments are placed
one tab stop left of before. This is better all around: it leaves
more room on the line for comment text, and it means that in such
cases the comment uniformly starts at the next 4-space tab stop after
the code, rather than sometimes one and sometimes two tabs after.
Also, ensure that comments following #endif are indented the same
as comments following other preprocessor commands such as #else.
That inconsistency turns out to have been self-inflicted damage
from a poorly-thought-through post-indent "fixup" in pgindent.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 21:18:54 +02:00
|
|
|
bool agg_within_group; /* ORDER BY appeared in WITHIN GROUP */
|
2002-03-08 05:37:18 +01:00
|
|
|
bool agg_star; /* argument was really '*' */
|
|
|
|
bool agg_distinct; /* arguments were labeled DISTINCT */
|
2008-07-16 03:30:23 +02:00
|
|
|
bool func_variadic; /* last argument was labeled VARIADIC */
|
2008-12-28 19:54:01 +01:00
|
|
|
struct WindowDef *over; /* OVER clause, if any */
|
2006-03-14 23:48:25 +01:00
|
|
|
int location; /* token location, or -1 if unknown */
|
2002-03-08 05:37:18 +01:00
|
|
|
} FuncCall;
|
1997-08-31 13:43:09 +02:00
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/*
|
2008-08-30 03:39:14 +02:00
|
|
|
* A_Star - '*' representing all columns of a table or compound field
|
|
|
|
*
|
|
|
|
* This can appear within ColumnRef.fields, A_Indirection.indirection, and
|
|
|
|
* ResTarget.indirection lists.
|
|
|
|
*/
|
|
|
|
typedef struct A_Star
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
} A_Star;
|
|
|
|
|
|
|
|
/*
|
2015-12-23 03:05:16 +01:00
|
|
|
* A_Indices - array subscript or slice bounds ([idx] or [lidx:uidx])
|
|
|
|
*
|
|
|
|
* In slice case, either or both of lidx and uidx can be NULL (omitted).
|
|
|
|
* In non-slice case, uidx holds the single subscript and lidx is always NULL.
|
1997-12-04 01:28:15 +01:00
|
|
|
*/
|
2002-03-08 05:37:18 +01:00
|
|
|
typedef struct A_Indices
|
1997-12-04 01:28:15 +01:00
|
|
|
{
|
1998-02-26 05:46:47 +01:00
|
|
|
NodeTag type;
|
2015-12-23 03:05:16 +01:00
|
|
|
bool is_slice; /* true if slice (i.e., colon present) */
|
|
|
|
Node *lidx; /* slice lower bound, if any */
|
|
|
|
Node *uidx; /* subscript, or slice upper bound if any */
|
2002-03-08 05:37:18 +01:00
|
|
|
} A_Indices;
|
1997-12-04 01:28:15 +01:00
|
|
|
|
2002-03-21 17:02:16 +01:00
|
|
|
/*
|
2004-06-09 21:08:20 +02:00
|
|
|
* A_Indirection - select a field and/or array element from an expression
|
2002-03-21 17:02:16 +01:00
|
|
|
*
|
2008-08-30 03:39:14 +02:00
|
|
|
* The indirection list can contain A_Indices nodes (representing
|
|
|
|
* subscripting), string Value nodes (representing field selection --- the
|
|
|
|
* string value is the name of the field to select), and A_Star nodes
|
|
|
|
* (representing selection of all fields of a composite type).
|
|
|
|
* For example, a complex selection operation like
|
2004-06-09 21:08:20 +02:00
|
|
|
* (foo).field1[42][7].field2
|
|
|
|
* would be represented with a single A_Indirection node having a 4-element
|
|
|
|
* indirection list.
|
|
|
|
*
|
2008-08-30 03:39:14 +02:00
|
|
|
* Currently, A_Star must appear only as the last list element --- the grammar
|
|
|
|
* is responsible for enforcing this!
|
2002-03-21 17:02:16 +01:00
|
|
|
*/
|
2004-06-09 21:08:20 +02:00
|
|
|
typedef struct A_Indirection
|
2002-03-21 17:02:16 +01:00
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
Node *arg; /* the thing being selected from */
|
2008-08-30 03:39:14 +02:00
|
|
|
List *indirection; /* subscripts and/or field names and/or * */
|
2004-06-09 21:08:20 +02:00
|
|
|
} A_Indirection;
|
2002-03-21 17:02:16 +01:00
|
|
|
|
2008-03-20 22:42:48 +01:00
|
|
|
/*
|
|
|
|
* A_ArrayExpr - an ARRAY[] construct
|
|
|
|
*/
|
|
|
|
typedef struct A_ArrayExpr
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
List *elements; /* array element expressions */
|
2008-08-29 01:09:48 +02:00
|
|
|
int location; /* token location, or -1 if unknown */
|
2008-03-20 22:42:48 +01:00
|
|
|
} A_ArrayExpr;
|
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/*
|
|
|
|
* ResTarget -
|
2004-06-09 21:08:20 +02:00
|
|
|
* result target (used in target list of pre-transformed parse trees)
|
2002-03-08 05:37:18 +01:00
|
|
|
*
|
2006-08-02 03:59:48 +02:00
|
|
|
* In a SELECT target list, 'name' is the column label from an
|
2004-06-09 21:08:20 +02:00
|
|
|
* 'AS ColumnLabel' clause, or NULL if there was none, and 'val' is the
|
|
|
|
* value expression itself. The 'indirection' field is not used.
|
2002-03-08 05:37:18 +01:00
|
|
|
*
|
2006-08-02 03:59:48 +02:00
|
|
|
* INSERT uses ResTarget in its target-column-names list. Here, 'name' is
|
|
|
|
* the name of the destination column, 'indirection' stores any subscripts
|
|
|
|
* attached to the destination, and 'val' is not used.
|
2004-06-09 21:08:20 +02:00
|
|
|
*
|
|
|
|
* In an UPDATE target list, 'name' is the name of the destination column,
|
|
|
|
* 'indirection' stores any subscripts attached to the destination, and
|
|
|
|
* 'val' is the expression to assign.
|
|
|
|
*
|
|
|
|
* See A_Indirection for more info about what can appear in 'indirection'.
|
2002-03-08 05:37:18 +01:00
|
|
|
*/
|
|
|
|
typedef struct ResTarget
|
2000-01-14 23:11:38 +01:00
|
|
|
{
|
|
|
|
NodeTag type;
|
2002-03-08 05:37:18 +01:00
|
|
|
char *name; /* column name or NULL */
|
2008-08-30 03:39:14 +02:00
|
|
|
List *indirection; /* subscripts, field names, and '*', or NIL */
|
2005-10-15 04:49:52 +02:00
|
|
|
Node *val; /* the value expression to compute or assign */
|
2006-03-23 01:19:30 +01:00
|
|
|
int location; /* token location, or -1 if unknown */
|
2002-03-08 05:37:18 +01:00
|
|
|
} ResTarget;
|
1997-12-04 01:28:15 +01:00
|
|
|
|
Implement UPDATE tab SET (col1,col2,...) = (SELECT ...), ...
This SQL-standard feature allows a sub-SELECT yielding multiple columns
(but only one row) to be used to compute the new values of several columns
to be updated. While the same results can be had with an independent
sub-SELECT per column, such a workaround can require a great deal of
duplicated computation.
The standard actually says that the source for a multi-column assignment
could be any row-valued expression. The implementation used here is
tightly tied to our existing sub-SELECT support and can't handle other
cases; the Bison grammar would have some issues with them too. However,
I don't feel too bad about this since other cases can be converted into
sub-SELECTs. For instance, "SET (a,b,c) = row_valued_function(x)" could
be written "SET (a,b,c) = (SELECT * FROM row_valued_function(x))".
2014-06-18 19:22:25 +02:00
|
|
|
/*
|
|
|
|
* MultiAssignRef - element of a row source expression for UPDATE
|
|
|
|
*
|
|
|
|
* In an UPDATE target list, when we have SET (a,b,c) = row-valued-expression,
|
|
|
|
* we generate separate ResTarget items for each of a,b,c. Their "val" trees
|
|
|
|
* are MultiAssignRef nodes numbered 1..n, linking to a common copy of the
|
|
|
|
* row-valued-expression (which parse analysis will process only once, when
|
|
|
|
* handling the MultiAssignRef with colno=1).
|
|
|
|
*/
|
|
|
|
typedef struct MultiAssignRef
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
Node *source; /* the row-valued expression */
|
|
|
|
int colno; /* column number for this target (1..n) */
|
|
|
|
int ncolumns; /* number of targets in the construct */
|
|
|
|
} MultiAssignRef;
|
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/*
|
2003-08-17 21:58:06 +02:00
|
|
|
* SortBy - for ORDER BY clause
|
2002-03-08 05:37:18 +01:00
|
|
|
*/
|
2003-08-17 21:58:06 +02:00
|
|
|
typedef struct SortBy
|
2002-03-01 23:45:19 +01:00
|
|
|
{
|
|
|
|
NodeTag type;
|
2008-09-01 22:42:46 +02:00
|
|
|
Node *node; /* expression to sort on */
|
|
|
|
SortByDir sortby_dir; /* ASC/DESC/USING/default */
|
2007-11-15 22:14:46 +01:00
|
|
|
SortByNulls sortby_nulls; /* NULLS FIRST/LAST */
|
2003-08-17 21:58:06 +02:00
|
|
|
List *useOp; /* name of op to use, if SORTBY_USING */
|
2008-09-01 22:42:46 +02:00
|
|
|
int location; /* operator location, or -1 if none/unknown */
|
2003-08-17 21:58:06 +02:00
|
|
|
} SortBy;
|
2002-03-01 23:45:19 +01:00
|
|
|
|
2008-12-28 19:54:01 +01:00
|
|
|
/*
|
|
|
|
* WindowDef - raw representation of WINDOW and OVER clauses
|
2008-12-31 01:08:39 +01:00
|
|
|
*
|
|
|
|
* For entries in a WINDOW list, "name" is the window name being defined.
|
|
|
|
* For OVER clauses, we use "name" for the "OVER window" syntax, or "refname"
|
|
|
|
* for the "OVER (window)" syntax, which is subtly different --- the latter
|
|
|
|
* implies overriding the window frame clause.
|
2008-12-28 19:54:01 +01:00
|
|
|
*/
|
|
|
|
typedef struct WindowDef
|
|
|
|
{
|
|
|
|
NodeTag type;
|
2009-06-11 16:49:15 +02:00
|
|
|
char *name; /* window's own name */
|
|
|
|
char *refname; /* referenced window name, if any */
|
2008-12-28 19:54:01 +01:00
|
|
|
List *partitionClause; /* PARTITION BY expression list */
|
2009-06-11 16:49:15 +02:00
|
|
|
List *orderClause; /* ORDER BY (list of SortBy) */
|
|
|
|
int frameOptions; /* frame_clause options, see below */
|
2010-02-12 18:33:21 +01:00
|
|
|
Node *startOffset; /* expression for starting bound, if any */
|
|
|
|
Node *endOffset; /* expression for ending bound, if any */
|
2009-06-11 16:49:15 +02:00
|
|
|
int location; /* parse location, or -1 if none/unknown */
|
2008-12-28 19:54:01 +01:00
|
|
|
} WindowDef;
|
|
|
|
|
2008-12-31 01:08:39 +01:00
|
|
|
/*
|
|
|
|
* frameOptions is an OR of these bits. The NONDEFAULT and BETWEEN bits are
|
|
|
|
* used so that ruleutils.c can tell which properties were specified and
|
|
|
|
* which were defaulted; the correct behavioral bits must be set either way.
|
|
|
|
* The START_foo and END_foo options must come in pairs of adjacent bits for
|
|
|
|
* the convenience of gram.y, even though some of them are useless/invalid.
|
|
|
|
*/
|
2009-06-11 16:49:15 +02:00
|
|
|
#define FRAMEOPTION_NONDEFAULT 0x00001 /* any specified? */
|
|
|
|
#define FRAMEOPTION_RANGE 0x00002 /* RANGE behavior */
|
|
|
|
#define FRAMEOPTION_ROWS 0x00004 /* ROWS behavior */
|
Support all SQL:2011 options for window frame clauses.
This patch adds the ability to use "RANGE offset PRECEDING/FOLLOWING"
frame boundaries in window functions. We'd punted on that back in the
original patch to add window functions, because it was not clear how to
do it in a reasonably data-type-extensible fashion. That problem is
resolved here by adding the ability for btree operator classes to provide
an "in_range" support function that defines how to add or subtract the
RANGE offset value. Factoring it this way also allows the operator class
to avoid overflow problems near the ends of the datatype's range, if it
wishes to expend effort on that. (In the committed patch, the integer
opclasses handle that issue, but it did not seem worth the trouble to
avoid overflow failures for datetime types.)
The patch includes in_range support for the integer_ops opfamily
(int2/int4/int8) as well as the standard datetime types. Support for
other numeric types has been requested, but that seems like suitable
material for a follow-on patch.
In addition, the patch adds GROUPS mode which counts the offset in
ORDER-BY peer groups rather than rows, and it adds the frame_exclusion
options specified by SQL:2011. As far as I can see, we are now fully
up to spec on window framing options.
Existing behaviors remain unchanged, except that I changed the errcode
for a couple of existing error reports to meet the SQL spec's expectation
that negative "offset" values should be reported as SQLSTATE 22013.
Internally and in relevant parts of the documentation, we now consistently
use the terminology "offset PRECEDING/FOLLOWING" rather than "value
PRECEDING/FOLLOWING", since the term "value" is confusingly vague.
Oliver Ford, reviewed and whacked around some by me
Discussion: https://postgr.es/m/CAGMVOdu9sivPAxbNN0X+q19Sfv9edEPv=HibOJhB14TJv_RCQg@mail.gmail.com
2018-02-07 06:06:50 +01:00
|
|
|
#define FRAMEOPTION_GROUPS 0x00008 /* GROUPS behavior */
|
|
|
|
#define FRAMEOPTION_BETWEEN 0x00010 /* BETWEEN given? */
|
|
|
|
#define FRAMEOPTION_START_UNBOUNDED_PRECEDING 0x00020 /* start is U. P. */
|
|
|
|
#define FRAMEOPTION_END_UNBOUNDED_PRECEDING 0x00040 /* (disallowed) */
|
|
|
|
#define FRAMEOPTION_START_UNBOUNDED_FOLLOWING 0x00080 /* (disallowed) */
|
|
|
|
#define FRAMEOPTION_END_UNBOUNDED_FOLLOWING 0x00100 /* end is U. F. */
|
|
|
|
#define FRAMEOPTION_START_CURRENT_ROW 0x00200 /* start is C. R. */
|
|
|
|
#define FRAMEOPTION_END_CURRENT_ROW 0x00400 /* end is C. R. */
|
|
|
|
#define FRAMEOPTION_START_OFFSET_PRECEDING 0x00800 /* start is O. P. */
|
|
|
|
#define FRAMEOPTION_END_OFFSET_PRECEDING 0x01000 /* end is O. P. */
|
|
|
|
#define FRAMEOPTION_START_OFFSET_FOLLOWING 0x02000 /* start is O. F. */
|
|
|
|
#define FRAMEOPTION_END_OFFSET_FOLLOWING 0x04000 /* end is O. F. */
|
|
|
|
#define FRAMEOPTION_EXCLUDE_CURRENT_ROW 0x08000 /* omit C.R. */
|
|
|
|
#define FRAMEOPTION_EXCLUDE_GROUP 0x10000 /* omit C.R. & peers */
|
|
|
|
#define FRAMEOPTION_EXCLUDE_TIES 0x20000 /* omit C.R.'s peers */
|
|
|
|
|
|
|
|
#define FRAMEOPTION_START_OFFSET \
|
|
|
|
(FRAMEOPTION_START_OFFSET_PRECEDING | FRAMEOPTION_START_OFFSET_FOLLOWING)
|
|
|
|
#define FRAMEOPTION_END_OFFSET \
|
|
|
|
(FRAMEOPTION_END_OFFSET_PRECEDING | FRAMEOPTION_END_OFFSET_FOLLOWING)
|
|
|
|
#define FRAMEOPTION_EXCLUSION \
|
|
|
|
(FRAMEOPTION_EXCLUDE_CURRENT_ROW | FRAMEOPTION_EXCLUDE_GROUP | \
|
|
|
|
FRAMEOPTION_EXCLUDE_TIES)
|
2008-12-31 01:08:39 +01:00
|
|
|
|
|
|
|
#define FRAMEOPTION_DEFAULTS \
|
|
|
|
(FRAMEOPTION_RANGE | FRAMEOPTION_START_UNBOUNDED_PRECEDING | \
|
|
|
|
FRAMEOPTION_END_CURRENT_ROW)
|
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/*
|
|
|
|
* RangeSubselect - subquery appearing in a FROM clause
|
1999-12-16 18:24:19 +01:00
|
|
|
*/
|
2002-03-08 05:37:18 +01:00
|
|
|
typedef struct RangeSubselect
|
1999-12-16 18:24:19 +01:00
|
|
|
{
|
2000-04-12 19:17:23 +02:00
|
|
|
NodeTag type;
|
2012-08-08 01:02:54 +02:00
|
|
|
bool lateral; /* does it have LATERAL prefix? */
|
2002-03-08 05:37:18 +01:00
|
|
|
Node *subquery; /* the untransformed sub-select clause */
|
2002-03-21 17:02:16 +01:00
|
|
|
Alias *alias; /* table alias & optional column aliases */
|
2002-03-08 05:37:18 +01:00
|
|
|
} RangeSubselect;
|
1999-12-16 18:24:19 +01:00
|
|
|
|
2002-05-12 22:10:05 +02:00
|
|
|
/*
|
|
|
|
* RangeFunction - function call appearing in a FROM clause
|
Support multi-argument UNNEST(), and TABLE() syntax for multiple functions.
This patch adds the ability to write TABLE( function1(), function2(), ...)
as a single FROM-clause entry. The result is the concatenation of the
first row from each function, followed by the second row from each
function, etc; with NULLs inserted if any function produces fewer rows than
others. This is believed to be a much more useful behavior than what
Postgres currently does with multiple SRFs in a SELECT list.
This syntax also provides a reasonable way to combine use of column
definition lists with WITH ORDINALITY: put the column definition list
inside TABLE(), where it's clear that it doesn't control the ordinality
column as well.
Also implement SQL-compliant multiple-argument UNNEST(), by turning
UNNEST(a,b,c) into TABLE(unnest(a), unnest(b), unnest(c)).
The SQL standard specifies TABLE() with only a single function, not
multiple functions, and it seems to require an implicit UNNEST() which is
not what this patch does. There may be something wrong with that reading
of the spec, though, because if it's right then the spec's TABLE() is just
a pointless alternative spelling of UNNEST(). After further review of
that, we might choose to adopt a different syntax for what this patch does,
but in any case this functionality seems clearly worthwhile.
Andrew Gierth, reviewed by Zoltán Böszörményi and Heikki Linnakangas, and
significantly revised by me
2013-11-22 01:37:02 +01:00
|
|
|
*
|
|
|
|
* functions is a List because we use this to represent the construct
|
2014-05-06 18:12:18 +02:00
|
|
|
* ROWS FROM(func1(...), func2(...), ...). Each element of this list is a
|
2013-11-22 01:37:02 +01:00
|
|
|
* two-element sublist, the first element being the untransformed function
|
|
|
|
* call tree, and the second element being a possibly-empty list of ColumnDef
|
|
|
|
* nodes representing any columndef list attached to that function within the
|
2013-12-10 15:34:37 +01:00
|
|
|
* ROWS FROM() syntax.
|
2013-11-22 01:37:02 +01:00
|
|
|
*
|
|
|
|
* alias and coldeflist represent any alias and/or columndef list attached
|
|
|
|
* at the top level. (We disallow coldeflist appearing both here and
|
|
|
|
* per-function, but that's checked in parse analysis, not by the grammar.)
|
2002-05-12 22:10:05 +02:00
|
|
|
*/
|
|
|
|
typedef struct RangeFunction
|
|
|
|
{
|
|
|
|
NodeTag type;
|
2012-08-08 01:02:54 +02:00
|
|
|
bool lateral; /* does it have LATERAL prefix? */
|
2013-07-29 17:38:01 +02:00
|
|
|
bool ordinality; /* does it have WITH ORDINALITY suffix? */
|
2013-12-10 15:34:37 +01:00
|
|
|
bool is_rowsfrom; /* is result of ROWS FROM() syntax? */
|
2013-11-22 01:37:02 +01:00
|
|
|
List *functions; /* per-function information, see above */
|
2002-05-12 22:10:05 +02:00
|
|
|
Alias *alias; /* table alias & optional column aliases */
|
2006-10-04 02:30:14 +02:00
|
|
|
List *coldeflist; /* list of ColumnDef nodes to describe result
|
|
|
|
* of function returning RECORD */
|
2002-05-12 22:10:05 +02:00
|
|
|
} RangeFunction;
|
|
|
|
|
2017-03-08 16:39:37 +01:00
|
|
|
/*
|
|
|
|
* RangeTableFunc - raw form of "table functions" such as XMLTABLE
|
|
|
|
*/
|
|
|
|
typedef struct RangeTableFunc
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
bool lateral; /* does it have LATERAL prefix? */
|
|
|
|
Node *docexpr; /* document expression */
|
|
|
|
Node *rowexpr; /* row generator expression */
|
|
|
|
List *namespaces; /* list of namespaces as ResTarget */
|
|
|
|
List *columns; /* list of RangeTableFuncCol */
|
|
|
|
Alias *alias; /* table alias & optional column aliases */
|
|
|
|
int location; /* token location, or -1 if unknown */
|
|
|
|
} RangeTableFunc;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* RangeTableFuncCol - one column in a RangeTableFunc->columns
|
|
|
|
*
|
|
|
|
* If for_ordinality is true (FOR ORDINALITY), then the column is an int4
|
|
|
|
* column and the rest of the fields are ignored.
|
|
|
|
*/
|
|
|
|
typedef struct RangeTableFuncCol
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
char *colname; /* name of generated column */
|
|
|
|
TypeName *typeName; /* type of generated column */
|
|
|
|
bool for_ordinality; /* does it have FOR ORDINALITY? */
|
|
|
|
bool is_not_null; /* does it have NOT NULL? */
|
|
|
|
Node *colexpr; /* column filter expression */
|
|
|
|
Node *coldefexpr; /* column default value expression */
|
|
|
|
int location; /* token location, or -1 if unknown */
|
|
|
|
} RangeTableFuncCol;
|
|
|
|
|
2015-05-15 20:37:10 +02:00
|
|
|
/*
|
Redesign tablesample method API, and do extensive code review.
The original implementation of TABLESAMPLE modeled the tablesample method
API on index access methods, which wasn't a good choice because, without
specialized DDL commands, there's no way to build an extension that can
implement a TSM. (Raw inserts into system catalogs are not an acceptable
thing to do, because we can't undo them during DROP EXTENSION, nor will
pg_upgrade behave sanely.) Instead adopt an API more like procedural
language handlers or foreign data wrappers, wherein the only SQL-level
support object needed is a single handler function identified by having
a special return type. This lets us get rid of the supporting catalog
altogether, so that no custom DDL support is needed for the feature.
Adjust the API so that it can support non-constant tablesample arguments
(the original coding assumed we could evaluate the argument expressions at
ExecInitSampleScan time, which is undesirable even if it weren't outright
unsafe), and discourage sampling methods from looking at invisible tuples.
Make sure that the BERNOULLI and SYSTEM methods are genuinely repeatable
within and across queries, as required by the SQL standard, and deal more
honestly with methods that can't support that requirement.
Make a full code-review pass over the tablesample additions, and fix
assorted bugs, omissions, infelicities, and cosmetic issues (such as
failure to put the added code stanzas in a consistent ordering).
Improve EXPLAIN's output of tablesample plans, too.
Back-patch to 9.5 so that we don't have to support the original API
in production.
2015-07-25 20:39:00 +02:00
|
|
|
* RangeTableSample - TABLESAMPLE appearing in a raw FROM clause
|
2015-05-15 20:37:10 +02:00
|
|
|
*
|
2015-07-25 20:39:00 +02:00
|
|
|
* This node, appearing only in raw parse trees, represents
|
|
|
|
* <relation> TABLESAMPLE <method> (<params>) REPEATABLE (<num>)
|
|
|
|
* Currently, the <relation> can only be a RangeVar, but we might in future
|
|
|
|
* allow RangeSubselect and other options. Note that the RangeTableSample
|
|
|
|
* is wrapped around the node representing the <relation>, rather than being
|
|
|
|
* a subfield of it.
|
2015-05-15 20:37:10 +02:00
|
|
|
*/
|
|
|
|
typedef struct RangeTableSample
|
|
|
|
{
|
|
|
|
NodeTag type;
|
2015-07-25 20:39:00 +02:00
|
|
|
Node *relation; /* relation to be sampled */
|
|
|
|
List *method; /* sampling method name (possibly qualified) */
|
|
|
|
List *args; /* argument(s) for sampling method */
|
|
|
|
Node *repeatable; /* REPEATABLE expression, or NULL if none */
|
|
|
|
int location; /* method name location, or -1 if unknown */
|
2015-05-15 20:37:10 +02:00
|
|
|
} RangeTableSample;
|
|
|
|
|
2002-12-12 16:49:42 +01:00
|
|
|
/*
|
|
|
|
* ColumnDef - column definition (used in various creates)
|
|
|
|
*
|
|
|
|
* If the column has a default value, we may have the value expression
|
|
|
|
* in either "raw" form (an untransformed parse tree) or "cooked" form
|
2009-10-06 02:55:26 +02:00
|
|
|
* (a post-parse-analysis, executable expression tree), depending on
|
|
|
|
* how this ColumnDef node was created (by parsing, or by inheritance
|
2014-05-06 18:12:18 +02:00
|
|
|
* from an existing relation). We should never have both in the same node!
|
2002-12-12 16:49:42 +01:00
|
|
|
*
|
Remove collation information from TypeName, where it does not belong.
The initial collations patch treated a COLLATE spec as part of a TypeName,
following what can only be described as brain fade on the part of the SQL
committee. It's a lot more reasonable to treat COLLATE as a syntactically
separate object, so that it can be added in only the productions where it
actually belongs, rather than needing to reject it in a boatload of places
where it doesn't belong (something the original patch mostly failed to do).
In addition this change lets us meet the spec's requirement to allow
COLLATE anywhere in the clauses of a ColumnDef, and it avoids unfriendly
behavior for constructs such as "foo::type COLLATE collation".
To do this, pull collation information out of TypeName and put it in
ColumnDef instead, thus reverting most of the collation-related changes in
parse_type.c's API. I made one additional structural change, which was to
use a ColumnDef as an intermediate node in AT_AlterColumnType AlterTableCmd
nodes. This provides enough room to get rid of the "transform" wart in
AlterTableCmd too, since the ColumnDef can carry the USING expression
easily enough.
Also fix some other minor bugs that have crept in in the same areas,
like failure to copy recently-added fields of ColumnDef in copyfuncs.c.
While at it, document the formerly secret ability to specify a collation
in ALTER TABLE ALTER COLUMN TYPE, ALTER TYPE ADD ATTRIBUTE, and
ALTER TYPE ALTER ATTRIBUTE TYPE; and correct some misstatements about
what the default collation selection will be when COLLATE is omitted.
BTW, the three-parameter form of format_type() should go away too,
since it just contributes to the confusion in this area; but I'll do
that in a separate patch.
2011-03-10 04:38:52 +01:00
|
|
|
* Similarly, we may have a COLLATE specification in either raw form
|
|
|
|
* (represented as a CollateClause with arg==NULL) or cooked form
|
|
|
|
* (the collation's OID).
|
|
|
|
*
|
2002-12-12 16:49:42 +01:00
|
|
|
* The constraints list may contain a CONSTR_DEFAULT item in a raw
|
|
|
|
* parsetree produced by gram.y, but transformCreateStmt will remove
|
|
|
|
* the item and set raw_default instead. CONSTR_DEFAULT items
|
|
|
|
* should not appear in any subsequent processing.
|
|
|
|
*/
|
|
|
|
typedef struct ColumnDef
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
char *colname; /* name of column */
|
2009-07-16 08:33:46 +02:00
|
|
|
TypeName *typeName; /* type of column */
|
2002-12-12 16:49:42 +01:00
|
|
|
int inhcount; /* number of times column is inherited */
|
|
|
|
bool is_local; /* column has local (non-inherited) def'n */
|
|
|
|
bool is_not_null; /* NOT NULL constraint specified? */
|
2010-01-29 00:21:13 +01:00
|
|
|
bool is_from_type; /* column definition came from table type */
|
2009-10-13 02:53:08 +02:00
|
|
|
char storage; /* attstorage setting, or 0 for default */
|
2005-10-15 04:49:52 +02:00
|
|
|
Node *raw_default; /* default value (untransformed parse tree) */
|
2009-10-06 02:55:26 +02:00
|
|
|
Node *cooked_default; /* default value (transformed expr tree) */
|
2017-04-06 14:33:16 +02:00
|
|
|
char identity; /* attidentity setting */
|
2018-04-26 20:47:16 +02:00
|
|
|
RangeVar *identitySequence; /* to store identity sequence name for
|
|
|
|
* ALTER TABLE ... ADD COLUMN */
|
2019-03-30 08:13:09 +01:00
|
|
|
char generated; /* attgenerated setting */
|
2011-03-10 04:38:52 +01:00
|
|
|
CollateClause *collClause; /* untransformed COLLATE spec, if any */
|
|
|
|
Oid collOid; /* collation OID (InvalidOid if not set) */
|
2002-12-12 16:49:42 +01:00
|
|
|
List *constraints; /* other constraints on column */
|
2011-08-05 19:24:03 +02:00
|
|
|
List *fdwoptions; /* per-column FDW options */
|
Support multi-argument UNNEST(), and TABLE() syntax for multiple functions.
This patch adds the ability to write TABLE( function1(), function2(), ...)
as a single FROM-clause entry. The result is the concatenation of the
first row from each function, followed by the second row from each
function, etc; with NULLs inserted if any function produces fewer rows than
others. This is believed to be a much more useful behavior than what
Postgres currently does with multiple SRFs in a SELECT list.
This syntax also provides a reasonable way to combine use of column
definition lists with WITH ORDINALITY: put the column definition list
inside TABLE(), where it's clear that it doesn't control the ordinality
column as well.
Also implement SQL-compliant multiple-argument UNNEST(), by turning
UNNEST(a,b,c) into TABLE(unnest(a), unnest(b), unnest(c)).
The SQL standard specifies TABLE() with only a single function, not
multiple functions, and it seems to require an implicit UNNEST() which is
not what this patch does. There may be something wrong with that reading
of the spec, though, because if it's right then the spec's TABLE() is just
a pointless alternative spelling of UNNEST(). After further review of
that, we might choose to adopt a different syntax for what this patch does,
but in any case this functionality seems clearly worthwhile.
Andrew Gierth, reviewed by Zoltán Böszörményi and Heikki Linnakangas, and
significantly revised by me
2013-11-22 01:37:02 +01:00
|
|
|
int location; /* parse location, or -1 if none/unknown */
|
2002-12-12 16:49:42 +01:00
|
|
|
} ColumnDef;

/*
 * TableLikeClause - CREATE TABLE ( ... LIKE ... ) clause
 */
typedef struct TableLikeClause
{
	NodeTag		type;
	RangeVar   *relation;
	bits32		options;		/* OR of TableLikeOption flags */
} TableLikeClause;

typedef enum TableLikeOption
{
	CREATE_TABLE_LIKE_COMMENTS = 1 << 0,
	CREATE_TABLE_LIKE_CONSTRAINTS = 1 << 1,
	CREATE_TABLE_LIKE_DEFAULTS = 1 << 2,
	CREATE_TABLE_LIKE_GENERATED = 1 << 3,
	CREATE_TABLE_LIKE_IDENTITY = 1 << 4,
	CREATE_TABLE_LIKE_INDEXES = 1 << 5,
	CREATE_TABLE_LIKE_STATISTICS = 1 << 6,
	CREATE_TABLE_LIKE_STORAGE = 1 << 7,
	CREATE_TABLE_LIKE_ALL = PG_INT32_MAX
} TableLikeOption;

/*
 * IndexElem - index parameters (used in CREATE INDEX, and in ON CONFLICT)
 *
 * For a plain index attribute, 'name' is the name of the table column to
 * index, and 'expr' is NULL.  For an index expression, 'name' is NULL and
 * 'expr' is the expression tree.
 */
typedef struct IndexElem
{
	NodeTag		type;
	char	   *name;			/* name of attribute to index, or NULL */
	Node	   *expr;			/* expression to index, or NULL */
	char	   *indexcolname;	/* name for index column; NULL = default */
	List	   *collation;		/* name of collation; NIL = default */
	List	   *opclass;		/* name of desired opclass; NIL = default */
	SortByDir	ordering;		/* ASC/DESC/default */
	SortByNulls nulls_ordering; /* FIRST/LAST/default */
} IndexElem;

/*
 * DefElem - a generic "name = value" option definition
 *
 * In some contexts the name can be qualified.  Also, certain SQL commands
 * allow a SET/ADD/DROP action to be attached to option settings, so it's
 * convenient to carry a field for that too.  (Note: currently, it is our
 * practice that the grammar allows namespace and action only in statements
 * where they are relevant; C code can just ignore those fields in other
 * statements.)
 */
typedef enum DefElemAction
{
	DEFELEM_UNSPEC,				/* no action given */
	DEFELEM_SET,
	DEFELEM_ADD,
	DEFELEM_DROP
} DefElemAction;

typedef struct DefElem
{
	NodeTag		type;
	char	   *defnamespace;	/* NULL if unqualified name */
	char	   *defname;
	Node	   *arg;			/* a (Value *) or a (TypeName *) */
	DefElemAction defaction;	/* unspecified action, or SET/ADD/DROP */
	int			location;		/* token location, or -1 if unknown */
} DefElem;

/*
 * LockingClause - raw representation of FOR [NO KEY] UPDATE/[KEY] SHARE
 *		options
 *
 * Note: lockedRels == NIL means "all relations in query".  Otherwise it
 * is a list of RangeVar nodes.  (We use RangeVar mainly because it carries
 * a location field --- currently, parse analysis insists on unqualified
 * names in LockingClause.)
 */
typedef struct LockingClause
{
	NodeTag		type;
	List	   *lockedRels;		/* FOR [KEY] UPDATE/SHARE relations */
	LockClauseStrength strength;
	LockWaitPolicy waitPolicy;	/* NOWAIT and SKIP LOCKED */
} LockingClause;

/*
 * XMLSERIALIZE (in raw parse tree only)
 */
typedef struct XmlSerialize
{
	NodeTag		type;
	XmlOptionType xmloption;	/* DOCUMENT or CONTENT */
	Node	   *expr;
	TypeName   *typeName;
	int			location;		/* token location, or -1 if unknown */
} XmlSerialize;

/* Partitioning related definitions */

/*
 * PartitionElem - parse-time representation of a single partition key
 *
 * expr can be either a raw expression tree or a parse-analyzed expression.
 * We don't store these on-disk, though.
 */
typedef struct PartitionElem
{
	NodeTag		type;
	char	   *name;			/* name of column to partition on, or NULL */
	Node	   *expr;			/* expression to partition on, or NULL */
	List	   *collation;		/* name of collation; NIL = default */
	List	   *opclass;		/* name of desired opclass; NIL = default */
	int			location;		/* token location, or -1 if unknown */
} PartitionElem;
|
|
|
|
|
|
|
|
/*
|
Code review focused on new node types added by partitioning support.
Fix failure to check that we got a plain Const from const-simplification of
a coercion request. This is the cause of bug #14666 from Tian Bing: there
is an int4 to money cast, but it's only stable not immutable (because of
dependence on lc_monetary), resulting in a FuncExpr that the code was
miserably unequipped to deal with, or indeed even to notice that it was
failing to deal with. Add test cases around this coercion behavior.
In view of the above, sprinkle the code liberally with castNode() macros,
in hope of catching the next such bug a bit sooner. Also, change some
functions that were randomly declared to take Node* to take more specific
pointer types. And change some struct fields that were declared Node*
but could be given more specific types, allowing removal of assorted
explicit casts.
Place PARTITION_MAX_KEYS check a bit closer to the code it's protecting.
Likewise check only-one-key-for-list-partitioning restriction in a less
random place.
Avoid not-per-project-style usages like !strcmp(...).
Fix assorted failures to avoid scribbling on the input of parse
transformation. I'm not sure how necessary this is, but it's entirely
silly for these functions to be expending cycles to avoid that and not
getting it right.
Add guards against partitioning on system columns.
Put backend/nodes/ support code into an order that matches handling
of these node types elsewhere.
Annotate the fact that somebody added location fields to PartitionBoundSpec
and PartitionRangeDatum but forgot to handle them in
outfuncs.c/readfuncs.c. This is fairly harmless for production purposes
(since readfuncs.c would just substitute -1 anyway) but it's still bogus.
It's not worth forcing a post-beta1 initdb just to fix this, but if we
have another reason to force initdb before 10.0, we should go back and
clean this up.
Contrariwise, somebody added location fields to PartitionElem and
PartitionSpec but forgot to teach exprLocation() about them.
Consolidate duplicative code in transformPartitionBound().
Improve a couple of error messages.
Improve assorted commentary.
Re-pgindent the files touched by this patch; this affects a few comment
blocks that must have been added quite recently.
Report: https://postgr.es/m/20170524024550.29935.14396@wrigleys.postgresql.org
2017-05-29 05:20:28 +02:00
/*
 * PartitionSpec - parse-time representation of a partition key specification
 *
 * This represents the key space we will be partitioning on.
Implement table partitioning.
Table partitioning is like table inheritance and reuses much of the
existing infrastructure, but there are some important differences.
The parent is called a partitioned table and is always empty; it may
not have indexes or non-inherited constraints, since those make no
sense for a relation with no data of its own. The children are called
partitions and contain all of the actual data. Each partition has an
implicit partitioning constraint. Multiple inheritance is not
allowed, and partitioning and inheritance can't be mixed. Partitions
can't have extra columns and may not allow nulls unless the parent
does. Tuples inserted into the parent are automatically routed to the
correct partition, so tuple-routing ON INSERT triggers are not needed.
Tuple routing isn't yet supported for partitions which are foreign
tables, and it doesn't handle updates that cross partition boundaries.
Currently, tables can be range-partitioned or list-partitioned. List
partitioning is limited to a single column, but range partitioning can
involve multiple columns. A partitioning "column" can be an
expression.
Because table partitioning is less general than table inheritance, it
is hoped that it will be easier to reason about properties of
partitions, and therefore that this will serve as a better foundation
for a variety of possible optimizations, including query planner
optimizations. The tuple routing which this patch does based on the
implicit partitioning constraints is an example of this, but it
seems likely that many other useful optimizations are also possible.
Amit Langote, reviewed and tested by Robert Haas, Ashutosh Bapat,
Amit Kapila, Rajkumar Raghuwanshi, Corey Huinker, Jaime Casanova,
Rushabh Lathia, Erik Rijkers, among others. Minor revisions by me.
2016-12-07 19:17:43 +01:00
*/
typedef struct PartitionSpec
{
NodeTag type;
Add hash partitioning.
Hash partitioning is useful when you want to partition a growing data
set evenly. This can be useful to keep table sizes reasonable, which
makes maintenance operations such as VACUUM faster, or to enable
partition-wise join.
At present, we still depend on constraint exclusion for partition
pruning, and the shape of the partition constraints for hash
partitioning is such that that doesn't work. Work is underway to fix
that, which should both improve performance and make partition
pruning work with hash partitioning.
Amul Sul, reviewed and tested by Dilip Kumar, Ashutosh Bapat, Yugo
Nagata, Rajkumar Raghuwanshi, Jesper Pedersen, and by me. A few
final tweaks also by me.
Discussion: http://postgr.es/m/CAAJ_b96fhpJAP=ALbETmeLk1Uni_GFZD938zgenhF49qgDTjaQ@mail.gmail.com
2017-11-10 00:07:25 +01:00
char *strategy; /* partitioning strategy ('hash', 'list' or
* 'range') */
List *partParams; /* List of PartitionElems */
int location; /* token location, or -1 if unknown */
} PartitionSpec;
/* Internal codes for partitioning strategies */
#define PARTITION_STRATEGY_HASH 'h'
#define PARTITION_STRATEGY_LIST 'l'
#define PARTITION_STRATEGY_RANGE 'r'
/*
* PartitionBoundSpec - a partition bound specification
*
* This represents the portion of the partition key space assigned to a
* particular partition. These are stored on disk in pg_class.relpartbound.
*/
struct PartitionBoundSpec
{
NodeTag type;
char strategy; /* see PARTITION_STRATEGY codes above */
Allow a partitioned table to have a default partition.
Any tuples that don't route to any other partition will route to the
default partition.
Jeevan Ladhe, Beena Emerson, Ashutosh Bapat, Rahila Syed, and Robert
Haas, with review and testing at various stages by (at least) Rushabh
Lathia, Keith Fiske, Amit Langote, Amul Sul, Rajkumar Raghuanshi, Sven
Kunze, Kyotaro Horiguchi, Thom Brown, Rafia Sabih, and Dilip Kumar.
Discussion: http://postgr.es/m/CAH2L28tbN4SYyhS7YV1YBWcitkqbhSWfQCy0G=apRcC_PEO-bg@mail.gmail.com
Discussion: http://postgr.es/m/CAOG9ApEYj34fWMcvBMBQ-YtqR9fTdXhdN82QEKG0SVZ6zeL1xg@mail.gmail.com
2017-09-08 23:28:04 +02:00
bool is_default; /* is it a default partition bound? */
/* Partitioning info for HASH strategy: */
int modulus;
int remainder;
/* Partitioning info for LIST strategy: */
List *listdatums; /* List of Consts (or A_Consts in raw tree) */
/* Partitioning info for RANGE strategy: */
List *lowerdatums; /* List of PartitionRangeDatums */
List *upperdatums; /* List of PartitionRangeDatums */
int location; /* token location, or -1 if unknown */
|
2018-04-15 02:12:14 +02:00
|
|
|
};
|
|
|
|
|
|
|
|
/*
|
Use MINVALUE/MAXVALUE instead of UNBOUNDED for range partition bounds.
Previously, UNBOUNDED meant no lower bound when used in the FROM list,
and no upper bound when used in the TO list, which was OK for
single-column range partitioning, but problematic with multiple
columns. For example, an upper bound of (10.0, UNBOUNDED) would not be
collocated with a lower bound of (10.0, UNBOUNDED), thus making it
difficult or impossible to define contiguous multi-column range
partitions in some cases.
Fix this by using MINVALUE and MAXVALUE instead of UNBOUNDED to
represent a partition column that is unbounded below or above
respectively. This syntax removes any ambiguity, and ensures that if
one partition's lower bound equals another partition's upper bound,
then the partitions are contiguous.
Also drop the constraint prohibiting finite values after an unbounded
column, and just document the fact that any values after MINVALUE or
MAXVALUE are ignored. Previously it was necessary to repeat UNBOUNDED
multiple times, which was needlessly verbose.
Note: Forces a post-PG 10 beta2 initdb.
Report by Amul Sul, original patch by Amit Langote with some
additional hacking by me.
Discussion: https://postgr.es/m/CAAJ_b947mowpLdxL3jo3YLKngRjrq9+Ej4ymduQTfYR+8=YAYQ@mail.gmail.com
2017-07-21 10:20:47 +02:00
|
|
|
* PartitionRangeDatum - one of the values in a range partition bound
|
|
|
|
*
|
|
|
|
* This can be MINVALUE, MAXVALUE or a specific bounded value.
|
|
|
|
*/
|
|
|
|
typedef enum PartitionRangeDatumKind
|
|
|
|
{
|
|
|
|
PARTITION_RANGE_DATUM_MINVALUE = -1, /* less than any other value */
|
|
|
|
PARTITION_RANGE_DATUM_VALUE = 0, /* a specific (bounded) value */
|
|
|
|
PARTITION_RANGE_DATUM_MAXVALUE = 1 /* greater than any other value */
|
|
|
|
} PartitionRangeDatumKind;
|
|
|
|
|
|
|
|
typedef struct PartitionRangeDatum
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
|
|
|
|
PartitionRangeDatumKind kind;
|
|
|
|
Node *value; /* Const (or A_Const in raw tree), if kind is
|
|
|
|
* PARTITION_RANGE_DATUM_VALUE, else NULL */
|
|
|
|
|
|
|
|
int location; /* token location, or -1 if unknown */
|
|
|
|
} PartitionRangeDatum;
|
|
|
|
|
|
|
|
/*
|
Local partitioned indexes
When CREATE INDEX is run on a partitioned table, create catalog entries
for an index on the partitioned table (which is just a placeholder since
the table proper has no data of its own), and recurse to create actual
indexes on the existing partitions; create them in future partitions
also.
As a convenience gadget, if the new index definition matches some
existing index in partitions, these are picked up and used instead of
creating new ones. Whichever way these indexes come about, they become
attached to the index on the parent table and are dropped alongside it,
and cannot be dropped on isolation unless they are detached first.
To support pg_dump'ing these indexes, add commands
CREATE INDEX ON ONLY <table>
(which creates the index on the parent partitioned table, without
recursing) and
ALTER INDEX ATTACH PARTITION
(which is used after the indexes have been created individually on each
partition, to attach them to the parent index). These reconstruct prior
database state exactly.
Reviewed-by: (in alphabetical order) Peter Eisentraut, Robert Haas, Amit
Langote, Jesper Pedersen, Simon Riggs, David Rowley
Discussion: https://postgr.es/m/20171113170646.gzweigyrgg6pwsg4@alvherre.pgsql
2018-01-19 15:49:22 +01:00
|
|
|
* PartitionCmd - info for ALTER TABLE/INDEX ATTACH/DETACH PARTITION commands
|
|
|
|
*/
|
|
|
|
typedef struct PartitionCmd
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
RangeVar *name; /* name of partition to attach/detach */
|
|
|
|
PartitionBoundSpec *bound; /* FOR VALUES, if attaching */
|
|
|
|
} PartitionCmd;
|
1997-04-02 05:34:46 +02:00
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/****************************************************************************
|
|
|
|
* Nodes for a Query tree
|
|
|
|
****************************************************************************/
|
1997-04-02 05:34:46 +02:00
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/*--------------------
|
|
|
|
* RangeTblEntry -
|
|
|
|
 * A range table is a List of RangeTblEntry nodes.
 *
 * A range table entry may represent a plain relation, a sub-select in
 * FROM, or the result of a JOIN clause.  (Only explicit JOIN syntax
 * produces an RTE, not the implicit join resulting from multiple FROM
 * items.  This is because we only need the RTE to deal with SQL features
 * like outer joins and join-output-column aliasing.)  Other special
 * RTE types also exist, as indicated by RTEKind.
 *
 * Note that we consider RTE_RELATION to cover anything that has a pg_class
 * entry.  relkind distinguishes the sub-cases.
 *
 * alias is an Alias node representing the AS alias-clause attached to the
 * FROM expression, or NULL if no clause.
 *
 * eref is the table reference name and column reference names (either
 * real or aliases).  Note that system columns (OID etc) are not included
 * in the column list.
 * eref->aliasname is required to be present, and should generally be used
 * to identify the RTE for error messages etc.
 *
 * In RELATION RTEs, the colnames in both alias and eref are indexed by
 * physical attribute number; this means there must be colname entries for
 * dropped columns.  When building an RTE we insert empty strings ("") for
 * dropped columns.  Note however that a stored rule may have nonempty
 * colnames for columns dropped since the rule was created (and for that
 * matter the colnames might be out of date due to column renamings).
 * The same comments apply to FUNCTION RTEs when a function's return type
 * is a named composite type.
 *
 * In JOIN RTEs, the colnames in both alias and eref are one-to-one with
 * joinaliasvars entries.  A JOIN RTE will omit columns of its inputs when
 * those columns are known to be dropped at parse time.  Again, however,
 * a stored rule might contain entries for columns dropped since the rule
 * was created.  (This is only possible for columns not actually referenced
 * in the rule.)  When loading a stored rule, we replace the joinaliasvars
 * items for any such columns with null pointers.  (We can't simply delete
 * them from the joinaliasvars list, because that would affect the attnums
 * of Vars referencing the rest of the list.)
 *
 * inh is true for relation references that should be expanded to include
 * inheritance children, if the rel has any.  This *must* be false for
 * RTEs other than RTE_RELATION entries.
 *
 * inFromCl marks those range variables that are listed in the FROM clause.
 * It's false for RTEs that are added to a query behind the scenes, such
 * as the NEW and OLD variables for a rule, or the subqueries of a UNION.
 * This flag is not used anymore during parsing, since the parser now uses
 * a separate "namespace" data structure to control visibility, but it is
 * needed by ruleutils.c to determine whether RTEs should be shown in
 * decompiled queries.
 *
 * requiredPerms and checkAsUser specify run-time access permissions
 * checks to be performed at query startup.  The user must have *all*
 * of the permissions that are OR'd together in requiredPerms (zero
 * indicates no permissions checking).  If checkAsUser is not zero,
 * then do the permissions checks using the access rights of that user,
 * not the current effective user ID.  (This allows rules to act as
 * setuid gateways.)  Permissions checks only apply to RELATION RTEs.
 *
 * For SELECT/INSERT/UPDATE permissions, if the user doesn't have
 * table-wide permissions then it is sufficient to have the permissions
 * on all columns identified in selectedCols (for SELECT) and/or
 * insertedCols and/or updatedCols (INSERT with ON CONFLICT DO UPDATE may
 * have all 3).  selectedCols, insertedCols and updatedCols are bitmapsets,
 * which cannot have negative integer members, so we subtract
 * FirstLowInvalidHeapAttributeNumber from column numbers before storing
 * them in these fields.  A whole-row Var reference is represented by
 * setting the bit for InvalidAttrNumber.
 *
 * updatedCols is also used in some other places, for example, to determine
 * which triggers to fire and in FDWs to know which changed columns they
 * need to ship off.  Generated columns that are caused to be updated by an
 * update to a base column are collected in extraUpdatedCols.  This is not
 * considered for permission checking, but it is useful in those places
 * that want to know the full set of columns being updated as opposed to
 * only the ones the user explicitly mentioned in the query.  (There is
 * currently no need for an extraInsertedCols, but it could exist.)
 *
 * securityQuals is a list of security barrier quals (boolean expressions),
 * to be tested in the listed order before returning a row from the
 * relation.  It is always NIL in parser output.  Entries are added by the
 * rewriter to implement security-barrier views and/or row-level security.
 * Note that the planner turns each boolean expression into an implicitly
 * AND'ed sublist, as is its usual habit with qualification expressions.
 *--------------------
 */
typedef enum RTEKind
{
	RTE_RELATION,				/* ordinary relation reference */
	RTE_SUBQUERY,				/* subquery in FROM */
	RTE_JOIN,					/* join */
	RTE_FUNCTION,				/* function in FROM */
	RTE_TABLEFUNC,				/* TableFunc(.., column list) */
	RTE_VALUES,					/* VALUES (<exprlist>), (<exprlist>), ... */
	RTE_CTE,					/* common table expr (WITH list element) */
	RTE_NAMEDTUPLESTORE,		/* tuplestore, e.g. for AFTER triggers */
	RTE_RESULT					/* RTE represents an empty FROM clause; such
								 * RTEs are added by the planner, they're not
								 * present during parsing or rewriting */
} RTEKind;

typedef struct RangeTblEntry
{
	NodeTag		type;

	RTEKind		rtekind;		/* see above */

	/*
	 * XXX the fields applicable to only some rte kinds should be merged into
	 * a union.  I didn't do this yet because the diffs would impact a lot of
	 * code that is being actively worked on.  FIXME someday.
	 */

	/*
	 * Fields valid for a plain relation RTE (else zero):
	 *
	 * As a special case, RTE_NAMEDTUPLESTORE can also set relid to indicate
	 * that the tuple format of the tuplestore is the same as the referenced
	 * relation.  This allows plans referencing AFTER trigger transition
	 * tables to be invalidated if the underlying table is altered.
	 *
	 * rellockmode is really LOCKMODE, but it's declared int to avoid having
	 * to include lock-related headers here.  It must be RowExclusiveLock if
	 * the RTE is an INSERT/UPDATE/DELETE target, else RowShareLock if the RTE
	 * is a SELECT FOR UPDATE/FOR SHARE target, else AccessShareLock.
	 *
	 * Note: in some cases, rule expansion may result in RTEs that are marked
	 * with RowExclusiveLock even though they are not the target of the
	 * current query; this happens if a DO ALSO rule simply scans the original
	 * target table.  We leave such RTEs with their original lockmode so as to
	 * avoid getting an additional, lesser lock.
*/
	Oid			relid;			/* OID of the relation */
	char		relkind;		/* relation kind (see pg_class.relkind) */
	int			rellockmode;	/* lock level that query requires on the rel */
	struct TableSampleClause *tablesample;	/* sampling info, or NULL */

	/*
	 * Fields valid for a subquery RTE (else NULL):
	 */
	Query	   *subquery;		/* the sub-query */
	bool		security_barrier;	/* is from security_barrier view? */

/*
|
|
|
|
* Fields valid for a join RTE (else NULL/zero):
|
|
|
|
*
|
Change post-rewriter representation of dropped columns in joinaliasvars.
It's possible to drop a column from an input table of a JOIN clause in a
view, if that column is nowhere actually referenced in the view. But it
will still be there in the JOIN clause's joinaliasvars list. We used to
replace such entries with NULL Const nodes, which is handy for generation
of RowExpr expansion of a whole-row reference to the view. The trouble
with that is that it can't be distinguished from the situation after
subquery pull-up of a constant subquery output expression below the JOIN.
Instead, replace such joinaliasvars with null pointers (empty expression
trees), which can't be confused with pulled-up expressions. expandRTE()
still emits the old convention, though, for convenience of RowExpr
generation and to reduce the risk of breaking extension code.
In HEAD and 9.3, this patch also fixes a problem with some new code in
ruleutils.c that was failing to cope with implicitly-casted joinaliasvars
entries, as per recent report from Feike Steenbergen. That oversight was
because of an inadequate description of the data structure in parsenodes.h,
which I've now corrected. There were some pre-existing oversights of the
same ilk elsewhere, which I believe are now all fixed.
2013-07-23 22:23:01 +02:00
|
|
|
* joinaliasvars is a list of (usually) Vars corresponding to the columns
|
2014-05-06 18:12:18 +02:00
|
|
|
* of the join result. An alias Var referencing column K of the join
|
Change post-rewriter representation of dropped columns in joinaliasvars.
It's possible to drop a column from an input table of a JOIN clause in a
view, if that column is nowhere actually referenced in the view. But it
will still be there in the JOIN clause's joinaliasvars list. We used to
replace such entries with NULL Const nodes, which is handy for generation
of RowExpr expansion of a whole-row reference to the view. The trouble
with that is that it can't be distinguished from the situation after
subquery pull-up of a constant subquery output expression below the JOIN.
Instead, replace such joinaliasvars with null pointers (empty expression
trees), which can't be confused with pulled-up expressions. expandRTE()
still emits the old convention, though, for convenience of RowExpr
generation and to reduce the risk of breaking extension code.
In HEAD and 9.3, this patch also fixes a problem with some new code in
ruleutils.c that was failing to cope with implicitly-casted joinaliasvars
entries, as per recent report from Feike Steenbergen. That oversight was
because of an inadequate description of the data structure in parsenodes.h,
which I've now corrected. There were some pre-existing oversights of the
same ilk elsewhere, which I believe are now all fixed.
2013-07-23 22:23:01 +02:00
|
|
|
* result can be replaced by the K'th element of joinaliasvars --- but to
|
|
|
|
* simplify the task of reverse-listing aliases correctly, we do not do
|
|
|
|
* that until planning time. In detail: an element of joinaliasvars can
|
|
|
|
* be a Var of one of the join's input relations, or such a Var with an
|
|
|
|
* implicit coercion to the join's output column type, or a COALESCE
|
|
|
|
* expression containing the two input column Vars (possibly coerced).
|
Reconsider the representation of join alias Vars.
The core idea of this patch is to make the parser generate join alias
Vars (that is, ones with varno pointing to a JOIN RTE) only when the
alias Var is actually different from any raw join input, that is a type
coercion and/or COALESCE is necessary to generate the join output value.
Otherwise just generate varno/varattno pointing to the relevant join
input column.
In effect, this means that the planner's flatten_join_alias_vars()
transformation is already done in the parser, for all cases except
(a) columns that are merged by JOIN USING and are transformed in the
process, and (b) whole-row join Vars. In principle that would allow
us to skip doing flatten_join_alias_vars() in many more queries than
we do now, but we don't have quite enough infrastructure to know that
we can do so --- in particular there's no cheap way to know whether
there are any whole-row join Vars. I'm not sure if it's worth the
trouble to add a Query-level flag for that, and in any case it seems
like fit material for a separate patch. But even without skipping the
work entirely, this should make flatten_join_alias_vars() faster,
particularly where there are nested joins that it previously had to
flatten recursively.
An essential part of this change is to replace Var nodes'
varnoold/varoattno fields with varnosyn/varattnosyn, which have
considerably more tightly-defined meanings than the old fields: when
they differ from varno/varattno, they identify the Var's position in
an aliased JOIN RTE, and the join alias is what ruleutils.c should
print for the Var. This is necessary because the varno change
destroyed ruleutils.c's ability to find the JOIN RTE from the Var's
varno.
Another way in which this change broke ruleutils.c is that it's no
longer feasible to determine, from a JOIN RTE's joinaliasvars list,
which join columns correspond to which columns of the join's immediate
input relations. (If those are sub-joins, the joinaliasvars entries
may point to columns of their base relations, not the sub-joins.)
But that was a horrid mess requiring a lot of fragile assumptions
already, so let's just bite the bullet and add some more JOIN RTE
fields to make it more straightforward to figure that out. I added
two integer-List fields containing the relevant column numbers from
the left and right input rels, plus a count of how many merged columns
there are.
This patch depends on the ParseNamespaceColumn infrastructure that
I added in commit 5815696bc. The biggest bit of code change is
restructuring transformFromClauseItem's handling of JOINs so that
the ParseNamespaceColumn data is propagated upward correctly.
Other than that and the ruleutils fixes, everything pretty much
just works, though some processing is now inessential. I grabbed
two pieces of low-hanging fruit in that line:
1. In find_expr_references, we don't need to recurse into join alias
Vars anymore. There aren't any except for references to merged USING
columns, which are more properly handled when we scan the join's RTE.
This change actually fixes an edge-case issue: we will now record a
dependency on any type-coercion function present in a USING column's
joinaliasvar, even if that join column has no references in the query
text. The odds of the missing dependency causing a problem seem quite
small: you'd have to posit somebody dropping an implicit cast between
two data types, without removing the types themselves, and then having
a stored rule containing a whole-row Var for a join whose USING merge
depends on that cast. So I don't feel a great need to change this in
the back branches. But in theory this way is more correct.
2. markRTEForSelectPriv and markTargetListOrigin don't need to recurse
into join alias Vars either, because the cases they care about don't
apply to alias Vars for USING columns that are semantically distinct
from the underlying columns. This removes the only case in which
markVarForSelectPriv could be called with NULL for the RTE, so adjust
the comments to describe that hack as being strictly internal to
markRTEForSelectPriv.
catversion bump required due to changes in stored rules.
Discussion: https://postgr.es/m/7115.1577986646@sss.pgh.pa.us
2020-01-09 17:56:59 +01:00
|
|
|
* Elements beyond the first joinmergedcols entries are always just Vars,
|
|
|
|
* and are never referenced from elsewhere in the query (that is, join
|
|
|
|
* alias Vars are generated only for merged columns). We keep these
|
|
|
|
* entries only because they're needed in expandRTE() and similar code.
|
|
|
|
*
|
|
|
|
* Within a Query loaded from a stored rule, it is possible for non-merged
|
Change post-rewriter representation of dropped columns in joinaliasvars.
It's possible to drop a column from an input table of a JOIN clause in a
view, if that column is nowhere actually referenced in the view. But it
will still be there in the JOIN clause's joinaliasvars list. We used to
replace such entries with NULL Const nodes, which is handy for generation
of RowExpr expansion of a whole-row reference to the view. The trouble
with that is that it can't be distinguished from the situation after
subquery pull-up of a constant subquery output expression below the JOIN.
Instead, replace such joinaliasvars with null pointers (empty expression
trees), which can't be confused with pulled-up expressions. expandRTE()
still emits the old convention, though, for convenience of RowExpr
generation and to reduce the risk of breaking extension code.
In HEAD and 9.3, this patch also fixes a problem with some new code in
ruleutils.c that was failing to cope with implicitly-casted joinaliasvars
entries, as per recent report from Feike Steenbergen. That oversight was
because of an inadequate description of the data structure in parsenodes.h,
which I've now corrected. There were some pre-existing oversights of the
same ilk elsewhere, which I believe are now all fixed.
2013-07-23 22:23:01 +02:00
|
|
|
* joinaliasvars items to be null pointers, which are placeholders for
|
|
|
|
* (necessarily unreferenced) columns dropped since the rule was made.
|
|
|
|
* Also, once planning begins, joinaliasvars items can be almost anything,
|
|
|
|
* as a result of subquery-flattening substitutions.
|
Reconsider the representation of join alias Vars.
The core idea of this patch is to make the parser generate join alias
Vars (that is, ones with varno pointing to a JOIN RTE) only when the
alias Var is actually different from any raw join input, that is a type
coercion and/or COALESCE is necessary to generate the join output value.
Otherwise just generate varno/varattno pointing to the relevant join
input column.
In effect, this means that the planner's flatten_join_alias_vars()
transformation is already done in the parser, for all cases except
(a) columns that are merged by JOIN USING and are transformed in the
process, and (b) whole-row join Vars. In principle that would allow
us to skip doing flatten_join_alias_vars() in many more queries than
we do now, but we don't have quite enough infrastructure to know that
we can do so --- in particular there's no cheap way to know whether
there are any whole-row join Vars. I'm not sure if it's worth the
trouble to add a Query-level flag for that, and in any case it seems
like fit material for a separate patch. But even without skipping the
work entirely, this should make flatten_join_alias_vars() faster,
particularly where there are nested joins that it previously had to
flatten recursively.
An essential part of this change is to replace Var nodes'
varnoold/varoattno fields with varnosyn/varattnosyn, which have
considerably more tightly-defined meanings than the old fields: when
they differ from varno/varattno, they identify the Var's position in
an aliased JOIN RTE, and the join alias is what ruleutils.c should
print for the Var. This is necessary because the varno change
destroyed ruleutils.c's ability to find the JOIN RTE from the Var's
varno.
Another way in which this change broke ruleutils.c is that it's no
longer feasible to determine, from a JOIN RTE's joinaliasvars list,
which join columns correspond to which columns of the join's immediate
input relations. (If those are sub-joins, the joinaliasvars entries
may point to columns of their base relations, not the sub-joins.)
But that was a horrid mess requiring a lot of fragile assumptions
already, so let's just bite the bullet and add some more JOIN RTE
fields to make it more straightforward to figure that out. I added
two integer-List fields containing the relevant column numbers from
the left and right input rels, plus a count of how many merged columns
there are.
This patch depends on the ParseNamespaceColumn infrastructure that
I added in commit 5815696bc. The biggest bit of code change is
restructuring transformFromClauseItem's handling of JOINs so that
the ParseNamespaceColumn data is propagated upward correctly.
Other than that and the ruleutils fixes, everything pretty much
just works, though some processing is now inessential. I grabbed
two pieces of low-hanging fruit in that line:
1. In find_expr_references, we don't need to recurse into join alias
Vars anymore. There aren't any except for references to merged USING
columns, which are more properly handled when we scan the join's RTE.
This change actually fixes an edge-case issue: we will now record a
dependency on any type-coercion function present in a USING column's
joinaliasvar, even if that join column has no references in the query
text. The odds of the missing dependency causing a problem seem quite
small: you'd have to posit somebody dropping an implicit cast between
two data types, without removing the types themselves, and then having
a stored rule containing a whole-row Var for a join whose USING merge
depends on that cast. So I don't feel a great need to change this in
the back branches. But in theory this way is more correct.
2. markRTEForSelectPriv and markTargetListOrigin don't need to recurse
into join alias Vars either, because the cases they care about don't
apply to alias Vars for USING columns that are semantically distinct
from the underlying columns. This removes the only case in which
markVarForSelectPriv could be called with NULL for the RTE, so adjust
the comments to describe that hack as being strictly internal to
markRTEForSelectPriv.
catversion bump required due to changes in stored rules.
Discussion: https://postgr.es/m/7115.1577986646@sss.pgh.pa.us
2020-01-09 17:56:59 +01:00
 *
 * joinleftcols is an integer list of physical column numbers of the left
 * join input rel that are included in the join; likewise joinrightcols
 * for the right join input rel. (Which rels those are can be determined
 * from the associated JoinExpr.) If the join is USING/NATURAL, then the
 * first joinmergedcols entries in each list identify the merged columns.
 * The merged columns come first in the join output, then remaining
 * columns of the left input, then remaining columns of the right.
 *
 * Note that input columns could have been dropped after creation of a
 * stored rule, if they are not referenced in the query (in particular,
 * merged columns could not be dropped); this is not accounted for in
 * joinleftcols/joinrightcols.
 */
JoinType jointype; /* type of join */
int joinmergedcols; /* number of merged (JOIN USING) columns */
List *joinaliasvars; /* list of alias-var expansions */
List *joinleftcols; /* left-side input column numbers */
List *joinrightcols; /* right-side input column numbers */
/*
Support multi-argument UNNEST(), and TABLE() syntax for multiple functions.
This patch adds the ability to write TABLE( function1(), function2(), ...)
as a single FROM-clause entry. The result is the concatenation of the
first row from each function, followed by the second row from each
function, etc; with NULLs inserted if any function produces fewer rows than
others. This is believed to be a much more useful behavior than what
Postgres currently does with multiple SRFs in a SELECT list.
This syntax also provides a reasonable way to combine use of column
definition lists with WITH ORDINALITY: put the column definition list
inside TABLE(), where it's clear that it doesn't control the ordinality
column as well.
Also implement SQL-compliant multiple-argument UNNEST(), by turning
UNNEST(a,b,c) into TABLE(unnest(a), unnest(b), unnest(c)).
The SQL standard specifies TABLE() with only a single function, not
multiple functions, and it seems to require an implicit UNNEST() which is
not what this patch does. There may be something wrong with that reading
of the spec, though, because if it's right then the spec's TABLE() is just
a pointless alternative spelling of UNNEST(). After further review of
that, we might choose to adopt a different syntax for what this patch does,
but in any case this functionality seems clearly worthwhile.
Andrew Gierth, reviewed by Zoltán Böszörményi and Heikki Linnakangas, and
significantly revised by me
2013-11-22 01:37:02 +01:00
* Fields valid for a function RTE (else NIL/zero):
 *
 * When funcordinality is true, the eref->colnames list includes an alias
 * for the ordinality column. The ordinality column is otherwise
 * implicit, and must be accounted for "by hand" in places such as
 * expandRTE().
 */
List *functions; /* list of RangeTblFunction nodes */
bool funcordinality; /* is this called WITH ORDINALITY? */
/*
 * Fields valid for a TableFunc RTE (else NULL):
 */
TableFunc *tablefunc;

/*
 * Fields valid for a values RTE (else NIL):
 */
List *values_lists; /* list of expression lists */

/*
 * Fields valid for a CTE RTE (else NULL/zero):
 */
char *ctename; /* name of the WITH list item */
Index ctelevelsup; /* number of query levels up */
bool self_reference; /* is this a recursive self-reference? */
Fix reporting of column typmods for multi-row VALUES constructs.
expandRTE() and get_rte_attribute_type() reported the exprType() and
exprTypmod() values of the expressions in the first row of the VALUES as
being the column type/typmod returned by the VALUES RTE. That's fine for
the data type, since we coerce all expressions in a column to have the same
common type. But we don't coerce them to have a common typmod, so it was
possible for rows after the first one to return values that violate the
claimed column typmod. This leads to the incorrect result seen in bug
#14448 from Hassan Mahmood, as well as some other corner-case misbehaviors.
The desired behavior is the same as we use in other type-unification
cases: report the common typmod if there is one, but otherwise return -1
indicating no particular constraint. It's cheap for transformValuesClause
to determine the common typmod while transforming a multi-row VALUES, but
it'd be less cheap for expandRTE() and get_rte_attribute_type() to
re-determine that info every time they're asked --- possibly a lot less
cheap, if the VALUES has many rows. Therefore, the best fix is to record
the common typmods explicitly in a list in the VALUES RTE, as we were
already doing for column collations. This looks quite a bit like what
we're doing for CTE RTEs, so we can save a little bit of space and code by
unifying the representation for those two RTE types. They both now share
coltypes/coltypmods/colcollations fields. (At some point it might seem
desirable to populate those fields for all RTE types; but right now it
looks like constructing them for other RTE types would add more code and
cycles than it would save.)
The RTE change requires a catversion bump, so this fix is only usable
in HEAD. If we fix this at all in the back branches, the patch will
need to look quite different.
Report: https://postgr.es/m/20161205143037.4377.60754@wrigleys.postgresql.org
Discussion: https://postgr.es/m/27429.1480968538@sss.pgh.pa.us
2016-12-08 17:40:02 +01:00
/*
Fix some probably-minor oversights in readfuncs.c.
The system expects TABLEFUNC RTEs to have coltypes, coltypmods, and
colcollations lists, but outfuncs doesn't dump them and readfuncs doesn't
restore them. This doesn't cause obvious failures, because the only things
that look at those fields are expandRTE() and get_rte_attribute_type(),
which are mostly used during parse analysis, before anything would've
passed the parsetree through outfuncs/readfuncs. But expandRTE() is used
in build_physical_tlist(), which means that that function will return a
wrong answer for a TABLEFUNC RTE that came from a view. Very accidentally,
this doesn't cause serious problems, because what it will return is NIL
which callers will interpret as "couldn't build a physical tlist because
of dropped columns". So you still get a plan that works, though it's
marginally less efficient than it could be. There are also some other
expandRTE() calls associated with transformation of whole-row Vars in
the planner. I have been unable to exhibit misbehavior from that, and
it may be unreachable in any case that anyone would care about ... but
I'm not entirely convinced, so this seems like something we should back-
patch a fix for. Fortunately, we can fix it without forcing a change
of stored rules and a catversion bump, because we can just copy these
lists from the subsidiary TableFunc object.
readfuncs.c was also missing support for NamedTuplestoreScan plan nodes.
This accidentally fails to break parallel query because a query using
a named tuplestore would never be considered parallel-safe anyway.
However, project policy since parallel query came in is that all plan
node types should have outfuncs/readfuncs support, so this is clearly
an oversight that should be repaired.
Noted while fooling around with a patch to test outfuncs/readfuncs more
thoroughly. That exposed some other issues too, but these are the only
ones that seem worth back-patching.
Back-patch to v10 where both of these features came in.
Discussion: https://postgr.es/m/17114.1537138992@sss.pgh.pa.us
2018-09-18 19:02:27 +02:00
* Fields valid for CTE, VALUES, ENR, and TableFunc RTEs (else NIL):
 *
 * We need these for CTE RTEs so that the types of self-referential
 * columns are well-defined. For VALUES RTEs, storing these explicitly
 * saves having to re-determine the info by scanning the values_lists. For
 * ENRs, we store the types explicitly here (we could get the information
 * from the catalogs if 'relid' was supplied, but we'd still need these
 * for TupleDesc-based ENRs, so we might as well always store the type
 * info here). For TableFuncs, these fields are redundant with data in
 * the TableFunc node, but keeping them here allows some code sharing with
 * the other cases.
 *
 * For ENRs only, we have to consider the possibility of dropped columns.
 * A dropped column is included in these lists, but it will have zeroes in
 * all three lists (as well as an empty-string entry in eref). Testing
 * for zero coltype is the standard way to detect a dropped column.
 */
List *coltypes; /* OID list of column type OIDs */
List *coltypmods; /* integer list of column typmods */
List *colcollations; /* OID list of column collation OIDs */
/*
 * Fields valid for ENR RTEs (else NULL/zero):
 */
char *enrname; /* name of ephemeral named relation */
double enrtuples; /* estimated or actual from caller */

/*
 * Fields valid in all RTEs:
 */
Alias *alias; /* user-written alias clause, if any */
Alias *eref; /* expanded reference names */
bool lateral; /* subquery, function, or values is LATERAL? */
bool inh; /* inheritance requested? */
bool inFromCl; /* present in FROM clause? */
AclMode requiredPerms; /* bitmask of required access permissions */
Oid checkAsUser; /* if valid, check access as this role */
Bitmapset *selectedCols; /* columns needing SELECT permission */
Bitmapset *insertedCols; /* columns needing INSERT permission */
Bitmapset *updatedCols; /* columns needing UPDATE permission */
Bitmapset *extraUpdatedCols; /* generated columns being updated */
List *securityQuals; /* security barrier quals to apply, if any */
} RangeTblEntry;
/*
 * RangeTblFunction -
 *    RangeTblEntry subsidiary data for one function in a FUNCTION RTE.
 *
 * If the function had a column definition list (required for an
 * otherwise-unspecified RECORD result), funccolnames lists the names given
 * in the definition list, funccoltypes lists their declared column types,
 * funccoltypmods lists their typmods, funccolcollations their collations.
 * Otherwise, those fields are NIL.
 *
 * Notice we don't attempt to store info about the results of functions
 * returning named composite types, because those can change from time to
 * time. We do however remember how many columns we thought the type had
 * (including dropped columns!), so that we can successfully ignore any
 * columns added after the query was parsed.
 */
|
|
|
|
typedef struct RangeTblFunction
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
|
|
|
|
Node *funcexpr; /* expression tree for func call */
|
|
|
|
int funccolcount; /* number of columns it contributes to RTE */
|
|
|
|
/* These fields record the contents of a column definition list, if any: */
|
|
|
|
List *funccolnames; /* column names (list of String) */
|
|
|
|
List *funccoltypes; /* OID list of column type OIDs */
|
|
|
|
List *funccoltypmods; /* integer list of column typmods */
|
Phase 2 of pgindent updates.
Change pg_bsd_indent to follow upstream rules for placement of comments
to the right of code, and remove pgindent hack that caused comments
following #endif to not obey the general rule.
Commit e3860ffa4dd0dad0dd9eea4be9cc1412373a8c89 wasn't actually using
the published version of pg_bsd_indent, but a hacked-up version that
tried to minimize the amount of movement of comments to the right of
code. The situation of interest is where such a comment has to be
moved to the right of its default placement at column 33 because there's
code there. BSD indent has always moved right in units of tab stops
in such cases --- but in the previous incarnation, indent was working
in 8-space tab stops, while now it knows we use 4-space tabs. So the
net result is that in about half the cases, such comments are placed
one tab stop left of before. This is better all around: it leaves
more room on the line for comment text, and it means that in such
cases the comment uniformly starts at the next 4-space tab stop after
the code, rather than sometimes one and sometimes two tabs after.
Also, ensure that comments following #endif are indented the same
as comments following other preprocessor commands such as #else.
That inconsistency turns out to have been self-inflicted damage
from a poorly-thought-through post-indent "fixup" in pgindent.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 21:18:54 +02:00
|
|
|
List *funccolcollations; /* OID list of column collation OIDs */
|
Support multi-argument UNNEST(), and TABLE() syntax for multiple functions.
This patch adds the ability to write TABLE( function1(), function2(), ...)
as a single FROM-clause entry. The result is the concatenation of the
first row from each function, followed by the second row from each
function, etc; with NULLs inserted if any function produces fewer rows than
others. This is believed to be a much more useful behavior than what
Postgres currently does with multiple SRFs in a SELECT list.
This syntax also provides a reasonable way to combine use of column
definition lists with WITH ORDINALITY: put the column definition list
inside TABLE(), where it's clear that it doesn't control the ordinality
column as well.
Also implement SQL-compliant multiple-argument UNNEST(), by turning
UNNEST(a,b,c) into TABLE(unnest(a), unnest(b), unnest(c)).
The SQL standard specifies TABLE() with only a single function, not
multiple functions, and it seems to require an implicit UNNEST() which is
not what this patch does. There may be something wrong with that reading
of the spec, though, because if it's right then the spec's TABLE() is just
a pointless alternative spelling of UNNEST(). After further review of
that, we might choose to adopt a different syntax for what this patch does,
but in any case this functionality seems clearly worthwhile.
Andrew Gierth, reviewed by Zoltán Böszörményi and Heikki Linnakangas, and
significantly revised by me
2013-11-22 01:37:02 +01:00
|
|
|
/* This is set during planning for use by the executor: */
|
|
|
|
Bitmapset *funcparams; /* PARAM_EXEC Param IDs affecting this func */
|
|
|
|
} RangeTblFunction;

/*
 * TableSampleClause - TABLESAMPLE appearing in a transformed FROM clause
 *
 * Unlike RangeTableSample, this is a subnode of the relevant RangeTblEntry.
 */
typedef struct TableSampleClause
{
	NodeTag		type;
	Oid			tsmhandler;		/* OID of the tablesample handler function */
	List	   *args;			/* tablesample argument expression(s) */
	Expr	   *repeatable;		/* REPEATABLE expression, or NULL if none */
} TableSampleClause;

/*
 * WithCheckOption -
 *		representation of WITH CHECK OPTION checks to be applied to new tuples
 *		when inserting/updating an auto-updatable view, or RLS WITH CHECK
 *		policies to be applied when inserting/updating a relation with RLS.
 */
typedef enum WCOKind
{
	WCO_VIEW_CHECK,				/* WCO on an auto-updatable view */
	WCO_RLS_INSERT_CHECK,		/* RLS INSERT WITH CHECK policy */
	WCO_RLS_UPDATE_CHECK,		/* RLS UPDATE WITH CHECK policy */
	WCO_RLS_CONFLICT_CHECK		/* RLS ON CONFLICT DO UPDATE USING policy */
} WCOKind;

typedef struct WithCheckOption
{
	NodeTag		type;
	WCOKind		kind;			/* kind of WCO */
	char	   *relname;		/* name of relation that specified the WCO */
	char	   *polname;		/* name of RLS policy being checked */
	Node	   *qual;			/* constraint qual to check */
	bool		cascaded;		/* true for a cascaded WCO on a view */
} WithCheckOption;

/*
 * SortGroupClause -
 *		representation of ORDER BY, GROUP BY, PARTITION BY,
 *		DISTINCT, DISTINCT ON items
 *
 * You might think that ORDER BY is only interested in defining ordering,
 * and GROUP/DISTINCT are only interested in defining equality.  However,
 * one way to implement grouping is to sort and then apply a "uniq"-like
 * filter.  So it's also interesting to keep track of possible sort operators
 * for GROUP/DISTINCT, and in particular to try to sort for the grouping
 * in a way that will also yield a requested ORDER BY ordering.  So we need
 * to be able to compare ORDER BY and GROUP/DISTINCT lists, which motivates
 * the decision to give them the same representation.
 *
 * tleSortGroupRef must match ressortgroupref of exactly one entry of the
 * query's targetlist; that is the expression to be sorted or grouped by.
 * eqop is the OID of the equality operator.
 * sortop is the OID of the ordering operator (a "<" or ">" operator),
 * or InvalidOid if not available.
 * nulls_first means about what you'd expect.  If sortop is InvalidOid
 * then nulls_first is meaningless and should be set to false.
 * hashable is true if eqop is hashable (note this condition also depends
 * on the datatype of the input expression).
 *
 * In an ORDER BY item, all fields must be valid.  (The eqop isn't essential
 * here, but it's cheap to get it along with the sortop, and requiring it
 * to be valid eases comparisons to grouping items.)  Note that this isn't
 * actually enough information to determine an ordering: if the sortop is
 * collation-sensitive, a collation OID is needed too.  We don't store the
 * collation in SortGroupClause because it's not available at the time the
 * parser builds the SortGroupClause; instead, consult the exposed collation
 * of the referenced targetlist expression to find out what it is.
 *
 * In a grouping item, eqop must be valid.  If the eqop is a btree equality
 * operator, then sortop should be set to a compatible ordering operator.
 * We prefer to set eqop/sortop/nulls_first to match any ORDER BY item that
 * the query presents for the same tlist item.  If there is none, we just
 * use the default ordering op for the datatype.
 *
 * If the tlist item's type has a hash opclass but no btree opclass, then
 * we will set eqop to the hash equality operator, sortop to InvalidOid,
 * and nulls_first to false.  A grouping item of this kind can only be
 * implemented by hashing, and of course it'll never match an ORDER BY item.
 *
 * The hashable flag is provided since we generally have the requisite
 * information readily available when the SortGroupClause is constructed,
 * and it's relatively expensive to get it again later.  Note there is no
 * need for a "sortable" flag since OidIsValid(sortop) serves the purpose.
 *
 * A query might have both ORDER BY and DISTINCT (or DISTINCT ON) clauses.
 * In SELECT DISTINCT, the distinctClause list is as long or longer than the
 * sortClause list, while in SELECT DISTINCT ON it's typically shorter.
 * The two lists must match up to the end of the shorter one --- the parser
 * rearranges the distinctClause if necessary to make this true.  (This
 * restriction ensures that only one sort step is needed to both satisfy the
 * ORDER BY and set up for the Unique step.  This is semantically necessary
 * for DISTINCT ON, and presents no real drawback for DISTINCT.)
 */
typedef struct SortGroupClause
{
	NodeTag		type;
	Index		tleSortGroupRef;	/* reference into targetlist */
	Oid			eqop;			/* the equality operator ('=' op) */
	Oid			sortop;			/* the ordering operator ('<' op), or 0 */
	bool		nulls_first;	/* do NULLs come before normal values? */
	bool		hashable;		/* can eqop be implemented by hashing? */
} SortGroupClause;

/*
 * GroupingSet -
 *		representation of CUBE, ROLLUP and GROUPING SETS clauses
 *
 * In a Query with grouping sets, the groupClause contains a flat list of
 * SortGroupClause nodes for each distinct expression used.  The actual
 * structure of the GROUP BY clause is given by the groupingSets tree.
 *
 * In the raw parser output, GroupingSet nodes (of all types except SIMPLE
 * which is not used) are potentially mixed in with the expressions in the
 * groupClause of the SelectStmt.  (An expression can't contain a GroupingSet,
 * but a list may mix GroupingSet and expression nodes.)  At this stage, the
 * content of each node is a list of expressions, some of which may be RowExprs
 * which represent sublists rather than actual row constructors, and nested
 * GroupingSet nodes where legal in the grammar.  The structure directly
 * reflects the query syntax.
 *
 * In parse analysis, the transformed expressions are used to build the tlist
 * and groupClause list (of SortGroupClause nodes), and the groupingSets tree
 * is eventually reduced to a fixed format:
 *
 * EMPTY nodes represent (), and obviously have no content
 *
 * SIMPLE nodes represent a list of one or more expressions to be treated as an
 * atom by the enclosing structure; the content is an integer list of
 * ressortgroupref values (see SortGroupClause)
 *
 * CUBE and ROLLUP nodes contain a list of one or more SIMPLE nodes.
 *
 * SETS nodes contain a list of EMPTY, SIMPLE, CUBE or ROLLUP nodes, but after
 * parse analysis they cannot contain more SETS nodes; enough of the syntactic
 * transforms of the spec have been applied that we no longer have arbitrarily
 * deep nesting (though we still preserve the use of cube/rollup).
 *
 * Note that if the groupingSets tree contains no SIMPLE nodes (only EMPTY
 * nodes at the leaves), then the groupClause will be empty, but this is still
 * an aggregation query (similar to using aggs or HAVING without GROUP BY).
 *
 * As an example, the following clause:
 *
 * GROUP BY GROUPING SETS ((a,b), CUBE(c,(d,e)))
 *
 * looks like this after raw parsing:
 *
 * SETS( RowExpr(a,b) , CUBE( c, RowExpr(d,e) ) )
 *
 * and parse analysis converts it to:
 *
 * SETS( SIMPLE(1,2), CUBE( SIMPLE(3), SIMPLE(4,5) ) )
 */
typedef enum
{
	GROUPING_SET_EMPTY,
	GROUPING_SET_SIMPLE,
	GROUPING_SET_ROLLUP,
	GROUPING_SET_CUBE,
	GROUPING_SET_SETS
} GroupingSetKind;

typedef struct GroupingSet
{
	NodeTag		type;
	GroupingSetKind kind;
	List	   *content;
	int			location;
} GroupingSet;

/*
 * WindowClause -
 *		transformed representation of WINDOW and OVER clauses
 *
 * A parsed Query's windowClause list contains these structs.  "name" is set
 * if the clause originally came from WINDOW, and is NULL if it originally
 * was an OVER clause (but note that we collapse out duplicate OVERs).
 * partitionClause and orderClause are lists of SortGroupClause structs.
 * If we have RANGE with offset PRECEDING/FOLLOWING, the semantics of that are
 * specified by startInRangeFunc/inRangeColl/inRangeAsc/inRangeNullsFirst
 * for the start offset, or endInRangeFunc/inRange* for the end offset.
 * winref is an ID number referenced by WindowFunc nodes; it must be unique
 * among the members of a Query's windowClause list.
 * When refname isn't null, the partitionClause is always copied from there;
 * the orderClause might or might not be copied (see copiedOrder); the framing
 * options are never copied, per spec.
 */
typedef struct WindowClause
{
	NodeTag		type;
	char	   *name;			/* window name (NULL in an OVER clause) */
	char	   *refname;		/* referenced window name, if any */
	List	   *partitionClause;	/* PARTITION BY list */
	List	   *orderClause;	/* ORDER BY list */
	int			frameOptions;	/* frame_clause options, see WindowDef */
	Node	   *startOffset;	/* expression for starting bound, if any */
	Node	   *endOffset;		/* expression for ending bound, if any */
	Oid			startInRangeFunc;	/* in_range function for startOffset */
	Oid			endInRangeFunc; /* in_range function for endOffset */
	Oid			inRangeColl;	/* collation for in_range tests */
	bool		inRangeAsc;		/* use ASC sort order for in_range tests? */
	bool		inRangeNullsFirst;	/* nulls sort first for in_range tests? */
	Index		winref;			/* ID referenced by window functions */
	bool		copiedOrder;	/* did we copy orderClause from refname? */
} WindowClause;
|
|
|
|
|
2006-04-30 20:30:40 +02:00
|
|
|
/*
|
|
|
|
* RowMarkClause -
|
Improve concurrency of foreign key locking
This patch introduces two additional lock modes for tuples: "SELECT FOR
KEY SHARE" and "SELECT FOR NO KEY UPDATE". These don't block each
other, in contrast with already existing "SELECT FOR SHARE" and "SELECT
FOR UPDATE". UPDATE commands that do not modify the values stored in
the columns that are part of the key of the tuple now grab a SELECT FOR
NO KEY UPDATE lock on the tuple, allowing them to proceed concurrently
with tuple locks of the FOR KEY SHARE variety.
Foreign key triggers now use FOR KEY SHARE instead of FOR SHARE; this
means the concurrency improvement applies to them, which is the whole
point of this patch.
The added tuple lock semantics require some rejiggering of the multixact
module, so that the locking level that each transaction is holding can
be stored alongside its Xid. Also, multixacts now need to persist
across server restarts and crashes, because they can now represent not
only tuple locks, but also tuple updates. This means we need more
careful tracking of lifetime of pg_multixact SLRU files; since they now
persist longer, we require more infrastructure to figure out when they
can be removed. pg_upgrade also needs to be careful to copy
pg_multixact files over from the old server to the new, or at least part
of multixact.c state, depending on the versions of the old and new
servers.
Tuple time qualification rules (HeapTupleSatisfies routines) need to be
careful not to consider tuples with the "is multi" infomask bit set as
being only locked; they might need to look up MultiXact values (i.e.
possibly do pg_multixact I/O) to find out the Xid that updated a tuple,
whereas they previously were assured to only use information readily
available from the tuple header. This is considered acceptable, because
the extra I/O would involve cases that would previously cause some
commands to block waiting for concurrent transactions to finish.
Another important change is the fact that locking tuples that have
previously been updated causes the future versions to be marked as
locked, too; this is essential for correctness of foreign key checks.
This causes additional WAL-logging, also (there was previously a single
WAL record for a locked tuple; now there are as many as updated copies
of the tuple there exist.)
With all this in place, contention related to tuples being checked by
foreign key rules should be much reduced.
As a bonus, the old behavior that a subtransaction grabbing a stronger
tuple lock than the parent (sub)transaction held on a given tuple and
later aborting caused the weaker lock to be lost, has been fixed.
Many new spec files were added for isolation tester framework, to ensure
overall behavior is sane. There's probably room for several more tests.
There were several reviewers of this patch; in particular, Noah Misch
and Andres Freund spent considerable time in it. Original idea for the
patch came from Simon Riggs, after a problem report by Joel Jacobson.
Most code is from me, with contributions from Marti Raudsepp, Alexander
Shulgin, Noah Misch and Andres Freund.
This patch was discussed in several pgsql-hackers threads; the most
important start at the following message-ids:
AANLkTimo9XVcEzfiBR-ut3KVNDkjm2Vxh+t8kAmWjPuv@mail.gmail.com
1290721684-sup-3951@alvh.no-ip.org
1294953201-sup-2099@alvh.no-ip.org
1320343602-sup-2290@alvh.no-ip.org
1339690386-sup-8927@alvh.no-ip.org
4FE5FF020200002500048A3D@gw.wicourts.gov
4FEAB90A0200002500048B7D@gw.wicourts.gov
2013-01-23 16:04:59 +01:00
|
|
|
* parser output representation of FOR [KEY] UPDATE/SHARE clauses
 *
 * Query.rowMarks contains a separate RowMarkClause node for each relation
 * identified as a FOR [KEY] UPDATE/SHARE target.  If one of these clauses
 * is applied to a subquery, we generate RowMarkClauses for all normal and
 * subquery rels in the subquery, but they are marked pushedDown = true to
 * distinguish them from clauses that were explicitly written at this query
 * level.  Also, Query.hasForUpdate tells whether there were explicit FOR
 * UPDATE/SHARE/KEY SHARE clauses in the current query level.
 */
typedef struct RowMarkClause
{
	NodeTag		type;
	Index		rti;			/* range table index of target relation */
	LockClauseStrength strength;
	LockWaitPolicy waitPolicy;	/* NOWAIT and SKIP LOCKED */
	bool		pushedDown;		/* pushed down from higher query level? */
} RowMarkClause;

/*
 * WithClause -
 *	   representation of WITH clause
 *
 * Note: WithClause does not propagate into the Query representation;
 * but CommonTableExpr does.
 */
typedef struct WithClause
{
	NodeTag		type;
	List	   *ctes;			/* list of CommonTableExprs */
	bool		recursive;		/* true = WITH RECURSIVE */
	int			location;		/* token location, or -1 if unknown */
} WithClause;

/*
 * InferClause -
 *		ON CONFLICT unique index inference clause
 *
 * Note: InferClause does not propagate into the Query representation.
 */
typedef struct InferClause
{
	NodeTag		type;
	List	   *indexElems;		/* IndexElems to infer unique index */
	Node	   *whereClause;	/* qualification (partial-index predicate) */
	char	   *conname;		/* Constraint name, or NULL if unnamed */
	int			location;		/* token location, or -1 if unknown */
} InferClause;

/*
 * OnConflictClause -
 *		representation of ON CONFLICT clause
 *
 * Note: OnConflictClause does not propagate into the Query representation.
 */
typedef struct OnConflictClause
{
	NodeTag		type;
	OnConflictAction action;	/* DO NOTHING or UPDATE? */
	InferClause *infer;			/* Optional index inference clause */
	List	   *targetList;		/* the target list (of ResTarget) */
	Node	   *whereClause;	/* qualifications */
	int			location;		/* token location, or -1 if unknown */
} OnConflictClause;

/*
 * CommonTableExpr -
 *	   representation of WITH list element
 *
 * We don't currently support the SEARCH or CYCLE clause.
 */
typedef enum CTEMaterialize
{
	CTEMaterializeDefault,		/* no option specified */
	CTEMaterializeAlways,		/* MATERIALIZED */
	CTEMaterializeNever			/* NOT MATERIALIZED */
} CTEMaterialize;

typedef struct CommonTableExpr
{
	NodeTag		type;
	char	   *ctename;		/* query name (never qualified) */
	List	   *aliascolnames;	/* optional list of column names */
	CTEMaterialize ctematerialized; /* is this an optimization fence? */
	/* SelectStmt/InsertStmt/etc before parse analysis, Query afterwards: */
	Node	   *ctequery;		/* the CTE's subquery */
	int			location;		/* token location, or -1 if unknown */
	/* These fields are set during parse analysis: */
	bool		cterecursive;	/* is this CTE actually recursive? */
	int			cterefcount;	/* number of RTEs referencing this CTE
								 * (excluding internal self-references) */
	List	   *ctecolnames;	/* list of output column names */
	List	   *ctecoltypes;	/* OID list of output column type OIDs */
	List	   *ctecoltypmods;	/* integer list of output column typmods */
	List	   *ctecolcollations;	/* OID list of column collation OIDs */
} CommonTableExpr;

/* Convenience macro to get the output tlist of a CTE's query */
#define GetCTETargetList(cte) \
	(AssertMacro(IsA((cte)->ctequery, Query)), \
	 ((Query *) (cte)->ctequery)->commandType == CMD_SELECT ? \
	 ((Query *) (cte)->ctequery)->targetList : \
	 ((Query *) (cte)->ctequery)->returningList)

/*
 * TriggerTransition -
 *	   representation of transition row or table naming clause
 *
 * Only transition tables are initially supported in the syntax, and only for
 * AFTER triggers, but other permutations are accepted by the parser so we can
 * give a meaningful message from C code.
 */
typedef struct TriggerTransition
{
	NodeTag		type;
	char	   *name;
	bool		isNew;
	bool		isTable;
} TriggerTransition;

/*****************************************************************************
 *		Raw Grammar Output Statements
 *****************************************************************************/

/*
 * RawStmt --- container for any one statement's raw parse tree
 *
 * Parse analysis converts a raw parse tree headed by a RawStmt node into
 * an analyzed statement headed by a Query node.  For optimizable statements,
 * the conversion is complex.  For utility statements, the parser usually just
 * transfers the raw parse tree (sans RawStmt) into the utilityStmt field of
 * the Query node, and all the useful work happens at execution time.
 *
 * stmt_location/stmt_len identify the portion of the source text string
 * containing this raw statement (useful for multi-statement strings).
 */
typedef struct RawStmt
{
	NodeTag		type;
	Node	   *stmt;			/* raw parse tree */
	int			stmt_location;	/* start location, or -1 if unknown */
	int			stmt_len;		/* length in bytes; 0 means "rest of string" */
} RawStmt;

/*****************************************************************************
 *		Optimizable Statements
 *****************************************************************************/

/* ----------------------
 *		Insert Statement
 *
 * The source expression is represented by SelectStmt for both the
 * SELECT and VALUES cases.  If selectStmt is NULL, then the query
 * is INSERT ... DEFAULT VALUES.
 * ----------------------
 */
typedef struct InsertStmt
{
	NodeTag		type;
	RangeVar   *relation;		/* relation to insert into */
	List	   *cols;			/* optional: names of the target columns */
	Node	   *selectStmt;		/* the source SELECT/VALUES, or NULL */
	OnConflictClause *onConflictClause; /* ON CONFLICT clause */
	List	   *returningList;	/* list of expressions to return */
	WithClause *withClause;		/* WITH clause */
	OverridingKind override;	/* OVERRIDING clause */
} InsertStmt;

/* ----------------------
 *		Delete Statement
 * ----------------------
 */
typedef struct DeleteStmt
{
	NodeTag		type;
	RangeVar   *relation;		/* relation to delete from */
	List	   *usingClause;	/* optional using clause for more tables */
	Node	   *whereClause;	/* qualifications */
	List	   *returningList;	/* list of expressions to return */
	WithClause *withClause;		/* WITH clause */
} DeleteStmt;

/* ----------------------
 *		Update Statement
 * ----------------------
 */
typedef struct UpdateStmt
{
	NodeTag		type;
	RangeVar   *relation;		/* relation to update */
	List	   *targetList;		/* the target list (of ResTarget) */
	Node	   *whereClause;	/* qualifications */
	List	   *fromClause;		/* optional from clause for more tables */
	List	   *returningList;	/* list of expressions to return */
	WithClause *withClause;		/* WITH clause */
} UpdateStmt;

/* ----------------------
 *		Select Statement
 *
 * A "simple" SELECT is represented in the output of gram.y by a single
 * SelectStmt node; so is a VALUES construct.  A query containing set
 * operators (UNION, INTERSECT, EXCEPT) is represented by a tree of SelectStmt
 * nodes, in which the leaf nodes are component SELECTs and the internal nodes
 * represent UNION, INTERSECT, or EXCEPT operators.  Using the same node
 * type for both leaf and internal nodes allows gram.y to stick ORDER BY,
 * LIMIT, etc, clause values into a SELECT statement without worrying
 * whether it is a simple or compound SELECT.
 * ----------------------
 */
typedef enum SetOperation
{
	SETOP_NONE = 0,
	SETOP_UNION,
	SETOP_INTERSECT,
	SETOP_EXCEPT
} SetOperation;

typedef struct SelectStmt
{
	NodeTag		type;

	/*
	 * These fields are used only in "leaf" SelectStmts.
	 */
	List	   *distinctClause; /* NULL, list of DISTINCT ON exprs, or
								 * lcons(NIL,NIL) for all (SELECT DISTINCT) */
	IntoClause *intoClause;		/* target for SELECT INTO */
	List	   *targetList;		/* the target list (of ResTarget) */
	List	   *fromClause;		/* the FROM clause */
	Node	   *whereClause;	/* WHERE qualification */
	List	   *groupClause;	/* GROUP BY clauses */
	Node	   *havingClause;	/* HAVING conditional-expression */
	List	   *windowClause;	/* WINDOW window_name AS (...), ... */

	/*
	 * In a "leaf" node representing a VALUES list, the above fields are all
	 * null, and instead this field is set.  Note that the elements of the
	 * sublists are just expressions, without ResTarget decoration.  Also note
	 * that a list element can be DEFAULT (represented as a SetToDefault
	 * node), regardless of the context of the VALUES list.  It's up to parse
	 * analysis to reject that where not valid.
	 */
	List	   *valuesLists;	/* untransformed list of expression lists */

	/*
	 * These fields are used in both "leaf" SelectStmts and upper-level
	 * SelectStmts.
	 */
	List	   *sortClause;		/* sort clause (a list of SortBy's) */
	Node	   *limitOffset;	/* # of result tuples to skip */
	Node	   *limitCount;		/* # of result tuples to return */
	List	   *lockingClause;	/* FOR UPDATE (list of LockingClause's) */
	WithClause *withClause;		/* WITH clause */

	/*
	 * These fields are used only in upper-level SelectStmts.
	 */
	SetOperation op;			/* type of set op */
	bool		all;			/* ALL specified? */
	struct SelectStmt *larg;	/* left child */
	struct SelectStmt *rarg;	/* right child */
	/* Eventually add fields for CORRESPONDING spec here */
} SelectStmt;
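/*
 * As the comment above notes, a compound query such as "a UNION b UNION c"
 * comes out of gram.y as a tree of SelectStmt nodes whose leaves are the
 * component SELECTs.  A minimal standalone sketch of that shape (toy struct
 * with made-up names, not the real parser types) and of walking it to count
 * the component SELECTs:
 */

```c
#include <stddef.h>

/* Toy stand-in for the set-operation fields of SelectStmt. */
typedef enum { TOY_SETOP_NONE, TOY_SETOP_UNION, TOY_SETOP_INTERSECT,
			   TOY_SETOP_EXCEPT } ToySetOp;

typedef struct ToySel
{
	ToySetOp	op;				/* TOY_SETOP_NONE for a leaf SELECT */
	struct ToySel *larg;		/* left input of a set operation */
	struct ToySel *rarg;		/* right input of a set operation */
} ToySel;

/* Count leaf (component) SELECTs in a set-operation tree. */
int
count_component_selects(const ToySel *s)
{
	if (s == NULL)
		return 0;
	if (s->op == TOY_SETOP_NONE)
		return 1;				/* a leaf SELECT */
	return count_component_selects(s->larg) +
		count_component_selects(s->rarg);
}
```

/*
 * For "a UNION b UNION c" the grammar builds a left-deep tree:
 * UNION(UNION(a, b), c), so count_component_selects returns 3.
 */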


/* ----------------------
 *		Set Operation node for post-analysis query trees
 *
 * After parse analysis, a SELECT with set operations is represented by a
 * top-level Query node containing the leaf SELECTs as subqueries in its
 * range table.  Its setOperations field shows the tree of set operations,
 * with leaf SelectStmt nodes replaced by RangeTblRef nodes, and internal
 * nodes replaced by SetOperationStmt nodes.  Information about the output
 * column types is added, too.  (Note that the child nodes do not necessarily
 * produce these types directly, but we've checked that their output types
 * can be coerced to the output column type.)  Also, if it's not UNION ALL,
 * information about the types' sort/group semantics is provided in the form
 * of a SortGroupClause list (same representation as, eg, DISTINCT).
 * The resolved common column collations are provided too; but note that if
 * it's not UNION ALL, it's okay for a column to not have a common collation,
 * so a member of the colCollations list could be InvalidOid even though the
 * column has a collatable type.
 * ----------------------
 */
typedef struct SetOperationStmt
{
	NodeTag		type;
	SetOperation op;			/* type of set op */
	bool		all;			/* ALL specified? */
	Node	   *larg;			/* left child */
	Node	   *rarg;			/* right child */
	/* Eventually add fields for CORRESPONDING spec here */

	/* Fields derived during parse analysis: */
	List	   *colTypes;		/* OID list of output column type OIDs */
	List	   *colTypmods;		/* integer list of output column typmods */
	List	   *colCollations;	/* OID list of output column collation OIDs */
	List	   *groupClauses;	/* a list of SortGroupClause's */
	/* groupClauses is NIL if UNION ALL, but must be set otherwise */
} SetOperationStmt;


/*****************************************************************************
 *		Other Statements (no optimizations required)
 *
 *		These are not touched by parser/analyze.c except to put them into
 *		the utilityStmt field of a Query.  This is eventually passed to
 *		ProcessUtility (by-passing rewriting and planning).  Some of the
 *		statements do need attention from parse analysis, and this is
 *		done by routines in parser/parse_utilcmd.c after ProcessUtility
 *		receives the command for execution.
 *		DECLARE CURSOR, EXPLAIN, and CREATE TABLE AS are special cases:
 *		they contain optimizable statements, which get processed normally
 *		by parser/analyze.c.
 *****************************************************************************/
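/*
 * A toy sketch of the routing decision described above (illustrative names
 * only; the real dispatch lives in the backend's tcop code, not here):
 * utility statements bypass rewriting and planning and go to ProcessUtility,
 * while optimizable statements take the normal rewrite/plan/execute path.
 */

```c
#include <string.h>

/* Toy command classification; CMD_UTILITY mirrors the real tag's role. */
typedef enum { TOY_CMD_SELECT, TOY_CMD_INSERT, TOY_CMD_UPDATE,
			   TOY_CMD_DELETE, TOY_CMD_UTILITY } ToyCmdType;

const char *
route_command(ToyCmdType type)
{
	if (type == TOY_CMD_UTILITY)
		return "ProcessUtility";	/* rewriting and planning bypassed */
	return "rewrite/plan/execute";	/* normal optimizable-statement path */
}
```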
|
1996-08-28 03:59:28 +02:00
|
|
|
|
2003-06-27 16:45:32 +02:00
|
|
|
/*
|
|
|
|
* When a command can act on several kinds of objects with only one
|
|
|
|
* parse structure required, use these constants to designate the
|
2011-02-09 17:55:32 +01:00
|
|
|
* object type. Note that commands typically don't support all the types.
|
2003-06-27 16:45:32 +02:00
|
|
|
*/
|
|
|
|
|
2003-08-04 02:43:34 +02:00
|
|
|
typedef enum ObjectType
|
|
|
|
{
|
2016-03-24 03:01:35 +01:00
|
|
|
OBJECT_ACCESS_METHOD,
|
2003-06-27 16:45:32 +02:00
|
|
|
OBJECT_AGGREGATE,
|
2015-03-16 16:06:34 +01:00
|
|
|
OBJECT_AMOP,
|
|
|
|
OBJECT_AMPROC,
|
2011-04-10 17:42:00 +02:00
|
|
|
OBJECT_ATTRIBUTE, /* type's attribute, when distinct from column */
|
2003-06-27 16:45:32 +02:00
|
|
|
OBJECT_CAST,
|
|
|
|
OBJECT_COLUMN,
|
2011-02-12 14:54:13 +01:00
|
|
|
OBJECT_COLLATION,
|
2003-06-27 16:45:32 +02:00
|
|
|
OBJECT_CONVERSION,
|
|
|
|
OBJECT_DATABASE,
|
2014-12-23 19:31:29 +01:00
|
|
|
OBJECT_DEFAULT,
|
2015-03-11 23:23:47 +01:00
|
|
|
OBJECT_DEFACL,
|
2003-06-27 16:45:32 +02:00
|
|
|
OBJECT_DOMAIN,
|
2014-12-23 13:06:44 +01:00
|
|
|
OBJECT_DOMCONSTRAINT,
|
2012-07-18 16:16:16 +02:00
|
|
|
OBJECT_EVENT_TRIGGER,
|
2011-02-08 22:08:41 +01:00
|
|
|
OBJECT_EXTENSION,
|
2008-12-19 17:25:19 +01:00
|
|
|
OBJECT_FDW,
|
|
|
|
OBJECT_FOREIGN_SERVER,
|
2011-01-02 05:48:11 +01:00
|
|
|
OBJECT_FOREIGN_TABLE,
|
2003-06-27 16:45:32 +02:00
|
|
|
OBJECT_FUNCTION,
|
|
|
|
OBJECT_INDEX,
|
|
|
|
OBJECT_LANGUAGE,
|
2003-11-21 23:32:49 +01:00
|
|
|
OBJECT_LARGEOBJECT,
|
2013-03-04 01:23:31 +01:00
|
|
|
OBJECT_MATVIEW,
|
2003-06-27 16:45:32 +02:00
|
|
|
OBJECT_OPCLASS,
|
|
|
|
OBJECT_OPERATOR,
|
2007-01-23 06:07:18 +01:00
|
|
|
OBJECT_OPFAMILY,
|
Row-Level Security Policies (RLS)
Building on the updatable security-barrier views work, add the
ability to define policies on tables to limit the set of rows
which are returned from a query and which are allowed to be added
to a table. Expressions defined by the policy for filtering are
added to the security barrier quals of the query, while expressions
defined to check records being added to a table are added to the
with-check options of the query.
New top-level commands are CREATE/ALTER/DROP POLICY and are
controlled by the table owner. Row Security is able to be enabled
and disabled by the owner on a per-table basis using
ALTER TABLE .. ENABLE/DISABLE ROW SECURITY.
Per discussion, ROW SECURITY is disabled on tables by default and
must be enabled for policies on the table to be used. If no
policies exist on a table with ROW SECURITY enabled, a default-deny
policy is used and no records will be visible.
By default, row security is applied at all times except for the
table owner and the superuser. A new GUC, row_security, is added
which can be set to ON, OFF, or FORCE. When set to FORCE, row
security will be applied even for the table owner and superusers.
When set to OFF, row security will be disabled when allowed and an
error will be thrown if the user does not have rights to bypass row
security.
Per discussion, pg_dump sets row_security = OFF by default to ensure
that exports and backups will have all data in the table or will
error if there are insufficient privileges to bypass row security.
A new option has been added to pg_dump, --enable-row-security, to
ask pg_dump to export with row security enabled.
A new role capability, BYPASSRLS, which can only be set by the
superuser, is added to allow other users to be able to bypass row
security using row_security = OFF.
Many thanks to the various individuals who have helped with the
design, particularly Robert Haas for his feedback.
Authors include Craig Ringer, KaiGai Kohei, Adam Brightwell, Dean
Rasheed, with additional changes and rework by me.
Reviewers have included all of the above, Greg Smith,
Jeff McCormick, and Robert Haas.
2014-09-19 17:18:35 +02:00
|
|
|
OBJECT_POLICY,
|
2017-11-30 14:46:13 +01:00
|
|
|
OBJECT_PROCEDURE,
|
2017-01-19 18:00:00 +01:00
|
|
|
OBJECT_PUBLICATION,
|
|
|
|
OBJECT_PUBLICATION_REL,
|
2005-06-28 07:09:14 +02:00
|
|
|
OBJECT_ROLE,
|
2017-11-30 14:46:13 +01:00
|
|
|
OBJECT_ROUTINE,
|
2003-06-27 16:45:32 +02:00
|
|
|
OBJECT_RULE,
|
|
|
|
OBJECT_SCHEMA,
|
|
|
|
OBJECT_SEQUENCE,
|
2017-01-19 18:00:00 +01:00
|
|
|
OBJECT_SUBSCRIPTION,
|
Implement multivariate n-distinct coefficients
Add support for explicitly declared statistic objects (CREATE
STATISTICS), allowing collection of statistics on more complex
combinations that individual table columns. Companion commands DROP
STATISTICS and ALTER STATISTICS ... OWNER TO / SET SCHEMA / RENAME are
added too. All this DDL has been designed so that more statistic types
can be added later on, such as multivariate most-common-values and
multivariate histograms between columns of a single table, leaving room
for permitting columns on multiple tables, too, as well as expressions.
This commit only adds support for collection of n-distinct coefficient
on user-specified sets of columns in a single table. This is useful to
estimate number of distinct groups in GROUP BY and DISTINCT clauses;
estimation errors there can cause over-allocation of memory in hashed
aggregates, for instance, so it's a worthwhile problem to solve. A new
special pseudo-type pg_ndistinct is used.
(num-distinct estimation was deemed sufficiently useful by itself that
this is worthwhile even if no further statistic types are added
immediately; so much so that another version of essentially the same
functionality was submitted by Kyotaro Horiguchi:
https://postgr.es/m/20150828.173334.114731693.horiguchi.kyotaro@lab.ntt.co.jp
though this commit does not use that code.)
Author: Tomas Vondra. Some code rework by Álvaro.
Reviewed-by: Dean Rasheed, David Rowley, Kyotaro Horiguchi, Jeff Janes,
Ideriha Takeshi
Discussion: https://postgr.es/m/543AFA15.4080608@fuzzy.cz
https://postgr.es/m/20170320190220.ixlaueanxegqd5gr@alvherre.pgsql
2017-03-24 18:06:10 +01:00
|
|
|
OBJECT_STATISTIC_EXT,
|
2014-12-23 13:06:44 +01:00
|
|
|
OBJECT_TABCONSTRAINT,
|
2003-06-27 16:45:32 +02:00
|
|
|
OBJECT_TABLE,
|
2004-06-18 08:14:31 +02:00
|
|
|
OBJECT_TABLESPACE,
|
2015-04-26 16:33:14 +02:00
|
|
|
OBJECT_TRANSFORM,
|
2003-06-27 16:45:32 +02:00
|
|
|
OBJECT_TRIGGER,
|
2007-08-21 03:11:32 +02:00
|
|
|
OBJECT_TSCONFIGURATION,
|
|
|
|
OBJECT_TSDICTIONARY,
|
|
|
|
OBJECT_TSPARSER,
|
|
|
|
OBJECT_TSTEMPLATE,
|
2003-06-27 16:45:32 +02:00
|
|
|
OBJECT_TYPE,
|
2015-03-11 21:01:13 +01:00
|
|
|
OBJECT_USER_MAPPING,
|
2003-06-27 16:45:32 +02:00
|
|
|
OBJECT_VIEW
|
2003-08-08 23:42:59 +02:00
|
|
|
} ObjectType;

/* ----------------------
 *		Create Schema Statement
 *
 * NOTE: the schemaElts list contains raw parsetrees for component statements
 * of the schema, such as CREATE TABLE, GRANT, etc.  These are analyzed and
 * executed after the schema itself is created.
 * ----------------------
 */
typedef struct CreateSchemaStmt
{
	NodeTag		type;
	char	   *schemaname;		/* the name of the schema to create */
	RoleSpec   *authrole;		/* the owner of the created schema */
	List	   *schemaElts;		/* schema components (list of parsenodes) */
	bool		if_not_exists;	/* just do nothing if schema already exists? */
} CreateSchemaStmt;

typedef enum DropBehavior
{
	DROP_RESTRICT,				/* drop fails if any dependent objects */
	DROP_CASCADE				/* remove dependent objects too */
} DropBehavior;
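/*
 * A minimal sketch of what the two behaviors mean for an object that has
 * dependents (toy types and function names, not the real dependency-walking
 * machinery in catalog/dependency.c):
 */

```c
#include <stdbool.h>

/* Toy stand-in for the real enum, to keep the sketch self-contained. */
typedef enum { TOY_DROP_RESTRICT, TOY_DROP_CASCADE } ToyDropBehavior;

/*
 * Under RESTRICT, a drop is refused if anything depends on the object;
 * under CASCADE, dependents are (notionally) dropped along with it.
 */
bool
drop_allowed(ToyDropBehavior behavior, int n_dependents)
{
	if (n_dependents == 0)
		return true;			/* nothing depends on it: always OK */
	return behavior == TOY_DROP_CASCADE;
}
```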


/* ----------------------
 *	Alter Table
 * ----------------------
 */
typedef struct AlterTableStmt
{
	NodeTag		type;
	RangeVar   *relation;		/* table to work on */
	List	   *cmds;			/* list of subcommands */
	ObjectType	relkind;		/* type of object */
	bool		missing_ok;		/* skip error if table missing */
} AlterTableStmt;

typedef enum AlterTableType
{
	AT_AddColumn,				/* add column */
	AT_AddColumnRecurse,		/* internal to commands/tablecmds.c */
	AT_AddColumnToView,			/* implicitly via CREATE OR REPLACE VIEW */
	AT_ColumnDefault,			/* alter column default */
	AT_DropNotNull,				/* alter column drop not null */
	AT_SetNotNull,				/* alter column set not null */
	AT_DropExpression,			/* alter column drop expression */
	AT_CheckNotNull,			/* check column is already marked not null */
	AT_SetStatistics,			/* alter column set statistics */
	AT_SetOptions,				/* alter column set ( options ) */
	AT_ResetOptions,			/* alter column reset ( options ) */
	AT_SetStorage,				/* alter column set storage */
	AT_DropColumn,				/* drop column */
	AT_DropColumnRecurse,		/* internal to commands/tablecmds.c */
	AT_AddIndex,				/* add index */
	AT_ReAddIndex,				/* internal to commands/tablecmds.c */
	AT_AddConstraint,			/* add constraint */
	AT_AddConstraintRecurse,	/* internal to commands/tablecmds.c */
	AT_ReAddConstraint,			/* internal to commands/tablecmds.c */
	AT_ReAddDomainConstraint,	/* internal to commands/tablecmds.c */
	AT_AlterConstraint,			/* alter constraint */
	AT_ValidateConstraint,		/* validate constraint */
	AT_ValidateConstraintRecurse,	/* internal to commands/tablecmds.c */
	AT_AddIndexConstraint,		/* add constraint using existing index */
	AT_DropConstraint,			/* drop constraint */
	AT_DropConstraintRecurse,	/* internal to commands/tablecmds.c */
	AT_ReAddComment,			/* internal to commands/tablecmds.c */
	AT_AlterColumnType,			/* alter column type */
	AT_AlterColumnGenericOptions,	/* alter column OPTIONS (...) */
	AT_ChangeOwner,				/* change owner */
	AT_ClusterOn,				/* CLUSTER ON */
	AT_DropCluster,				/* SET WITHOUT CLUSTER */
	AT_SetLogged,				/* SET LOGGED */
	AT_SetUnLogged,				/* SET UNLOGGED */
	AT_DropOids,				/* SET WITHOUT OIDS */
	AT_SetTableSpace,			/* SET TABLESPACE */
	AT_SetRelOptions,			/* SET (...) -- AM specific parameters */
	AT_ResetRelOptions,			/* RESET (...) -- AM specific parameters */
	AT_ReplaceRelOptions,		/* replace reloption list in its entirety */
	AT_EnableTrig,				/* ENABLE TRIGGER name */
	AT_EnableAlwaysTrig,		/* ENABLE ALWAYS TRIGGER name */
	AT_EnableReplicaTrig,		/* ENABLE REPLICA TRIGGER name */
	AT_DisableTrig,				/* DISABLE TRIGGER name */
	AT_EnableTrigAll,			/* ENABLE TRIGGER ALL */
	AT_DisableTrigAll,			/* DISABLE TRIGGER ALL */
	AT_EnableTrigUser,			/* ENABLE TRIGGER USER */
	AT_DisableTrigUser,			/* DISABLE TRIGGER USER */
	AT_EnableRule,				/* ENABLE RULE name */
	AT_EnableAlwaysRule,		/* ENABLE ALWAYS RULE name */
	AT_EnableReplicaRule,		/* ENABLE REPLICA RULE name */
	AT_DisableRule,				/* DISABLE RULE name */
	AT_AddInherit,				/* INHERIT parent */
	AT_DropInherit,				/* NO INHERIT parent */
	AT_AddOf,					/* OF <type_name> */
	AT_DropOf,					/* NOT OF */
	AT_ReplicaIdentity,			/* REPLICA IDENTITY */
|
Row-Level Security Policies (RLS)
Building on the updatable security-barrier views work, add the
ability to define policies on tables to limit the set of rows
which are returned from a query and which are allowed to be added
to a table. Expressions defined by the policy for filtering are
added to the security barrier quals of the query, while expressions
defined to check records being added to a table are added to the
with-check options of the query.
New top-level commands are CREATE/ALTER/DROP POLICY and are
controlled by the table owner. Row Security is able to be enabled
and disabled by the owner on a per-table basis using
ALTER TABLE .. ENABLE/DISABLE ROW SECURITY.
Per discussion, ROW SECURITY is disabled on tables by default and
must be enabled for policies on the table to be used. If no
policies exist on a table with ROW SECURITY enabled, a default-deny
policy is used and no records will be visible.
By default, row security is applied at all times except for the
table owner and the superuser. A new GUC, row_security, is added
which can be set to ON, OFF, or FORCE. When set to FORCE, row
security will be applied even for the table owner and superusers.
When set to OFF, row security will be disabled when allowed and an
error will be thrown if the user does not have rights to bypass row
security.
Per discussion, pg_dump sets row_security = OFF by default to ensure
that exports and backups will have all data in the table or will
error if there are insufficient privileges to bypass row security.
A new option has been added to pg_dump, --enable-row-security, to
ask pg_dump to export with row security enabled.
A new role capability, BYPASSRLS, which can only be set by the
superuser, is added to allow other users to be able to bypass row
security using row_security = OFF.
Many thanks to the various individuals who have helped with the
design, particularly Robert Haas for his feedback.
Authors include Craig Ringer, KaiGai Kohei, Adam Brightwell, Dean
Rasheed, with additional changes and rework by me.
Reviewers have included all of the above, Greg Smith,
Jeff McCormick, and Robert Haas.
2014-09-19 17:18:35 +02:00
|
|
|
AT_EnableRowSecurity, /* ENABLE ROW SECURITY */
|
|
|
|
AT_DisableRowSecurity, /* DISABLE ROW SECURITY */
|
2015-10-05 03:05:08 +02:00
|
|
|
AT_ForceRowSecurity, /* FORCE ROW SECURITY */
|
|
|
|
AT_NoForceRowSecurity, /* NO FORCE ROW SECURITY */
|
Implement table partitioning.
Table partitioning is like table inheritance and reuses much of the
existing infrastructure, but there are some important differences.
The parent is called a partitioned table and is always empty; it may
not have indexes or non-inherited constraints, since those make no
sense for a relation with no data of its own. The children are called
partitions and contain all of the actual data. Each partition has an
implicit partitioning constraint. Multiple inheritance is not
allowed, and partitioning and inheritance can't be mixed. Partitions
can't have extra columns and may not allow nulls unless the parent
does. Tuples inserted into the parent are automatically routed to the
correct partition, so tuple-routing ON INSERT triggers are not needed.
Tuple routing isn't yet supported for partitions which are foreign
tables, and it doesn't handle updates that cross partition boundaries.
Currently, tables can be range-partitioned or list-partitioned. List
partitioning is limited to a single column, but range partitioning can
involve multiple columns. A partitioning "column" can be an
expression.
Because table partitioning is less general than table inheritance, it
is hoped that it will be easier to reason about properties of
partitions, and therefore that this will serve as a better foundation
for a variety of possible optimizations, including query planner
optimizations. The tuple routing based which this patch does based on
the implicit partitioning constraints is an example of this, but it
seems likely that many other useful optimizations are also possible.
Amit Langote, reviewed and tested by Robert Haas, Ashutosh Bapat,
Amit Kapila, Rajkumar Raghuwanshi, Corey Huinker, Jaime Casanova,
Rushabh Lathia, Erik Rijkers, among others. Minor revisions by me.
2016-12-07 19:17:43 +01:00
|
|
|
AT_GenericOptions, /* OPTIONS (...) */
|
|
|
|
AT_AttachPartition, /* ATTACH PARTITION */
|
2017-04-06 14:33:16 +02:00
|
|
|
AT_DetachPartition, /* DETACH PARTITION */
|
|
|
|
AT_AddIdentity, /* ADD IDENTITY */
|
|
|
|
AT_SetIdentity, /* SET identity column options */
|
|
|
|
AT_DropIdentity /* DROP IDENTITY */
|
2004-05-05 06:48:48 +02:00
|
|
|
} AlterTableType;
|
|
|
|
|
2013-11-08 18:30:43 +01:00
|
|
|
typedef struct ReplicaIdentityStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
char identity_type;
|
|
|
|
char *name;
|
|
|
|
} ReplicaIdentityStmt;
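The single-character identity_type field records which REPLICA IDENTITY variant was requested. A minimal decoding sketch follows; the character codes ('d', 'n', 'f', 'i') are assumed to mirror the REPLICA_IDENTITY_* macros defined elsewhere in this header, and the function name is hypothetical:

```c
#include <stddef.h>
#include <string.h>

/* Decode ReplicaIdentityStmt.identity_type.  The codes below are assumed
 * to match the REPLICA_IDENTITY_* macros ('d', 'n', 'f', 'i'). */
static const char *replica_identity_name(char identity_type)
{
    switch (identity_type)
    {
        case 'd': return "DEFAULT";     /* primary key, if any */
        case 'n': return "NOTHING";     /* no replica identity */
        case 'f': return "FULL";        /* all columns */
        case 'i': return "USING INDEX"; /* "name" field holds the index name */
        default:  return NULL;          /* unrecognized code */
    }
}
```

For the 'i' case the struct's name field carries the index name; for the other variants name is unused.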
|
|
|
|
|
2004-05-05 06:48:48 +02:00
|
|
|
typedef struct AlterTableCmd /* one subcommand of an ALTER TABLE */
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
AlterTableType subtype; /* Type of table alteration to apply */
|
2005-08-24 00:40:47 +02:00
|
|
|
char *name; /* column, constraint, or trigger to act on,
|
Allow CURRENT/SESSION_USER to be used in certain commands
Commands such as ALTER USER, ALTER GROUP, ALTER ROLE, GRANT, and the
various ALTER OBJECT / OWNER TO, as well as ad-hoc clauses related to
roles such as the AUTHORIZATION clause of CREATE SCHEMA, the FOR clause
of CREATE USER MAPPING, and the FOR ROLE clause of ALTER DEFAULT
PRIVILEGES can now take the keywords CURRENT_USER and SESSION_USER as
user specifiers in place of an explicit user name.
This commit also fixes some quite ugly handling of special standards-
mandated syntax in CREATE USER MAPPING, which in particular would fail
to work in presence of a role named "current_user".
The special role specifiers PUBLIC and NONE also have more consistent
handling now.
Also take the opportunity to add location tracking to user specifiers.
Authors: Kyotaro Horiguchi. Heavily reworked by Álvaro Herrera.
Reviewed by: Rushabh Lathia, Adam Brightwell, Marti Raudsepp.
2015-03-09 19:41:54 +01:00
|
|
|
* or tablespace */
|
2017-10-04 00:53:44 +02:00
|
|
|
int16 num; /* attribute number for columns referenced by
|
|
|
|
* number */
|
2016-12-28 18:00:00 +01:00
|
|
|
RoleSpec *newowner;
|
Remove collation information from TypeName, where it does not belong.
The initial collations patch treated a COLLATE spec as part of a TypeName,
following what can only be described as brain fade on the part of the SQL
committee. It's a lot more reasonable to treat COLLATE as a syntactically
separate object, so that it can be added in only the productions where it
actually belongs, rather than needing to reject it in a boatload of places
where it doesn't belong (something the original patch mostly failed to do).
In addition this change lets us meet the spec's requirement to allow
COLLATE anywhere in the clauses of a ColumnDef, and it avoids unfriendly
behavior for constructs such as "foo::type COLLATE collation".
To do this, pull collation information out of TypeName and put it in
ColumnDef instead, thus reverting most of the collation-related changes in
parse_type.c's API. I made one additional structural change, which was to
use a ColumnDef as an intermediate node in AT_AlterColumnType AlterTableCmd
nodes. This provides enough room to get rid of the "transform" wart in
AlterTableCmd too, since the ColumnDef can carry the USING expression
easily enough.
Also fix some other minor bugs that have crept in in the same areas,
like failure to copy recently-added fields of ColumnDef in copyfuncs.c.
While at it, document the formerly secret ability to specify a collation
in ALTER TABLE ALTER COLUMN TYPE, ALTER TYPE ADD ATTRIBUTE, and
ALTER TYPE ALTER ATTRIBUTE TYPE; and correct some misstatements about
what the default collation selection will be when COLLATE is omitted.
BTW, the three-parameter form of format_type() should go away too,
since it just contributes to the confusion in this area; but I'll do
that in a separate patch.
2011-03-10 04:38:52 +01:00
|
|
|
Node *def; /* definition of new column, index,
|
|
|
|
* constraint, or parent table */
|
2002-07-01 17:27:56 +02:00
|
|
|
DropBehavior behavior; /* RESTRICT or CASCADE for DROP cases */
|
2009-07-20 04:42:28 +02:00
|
|
|
bool missing_ok; /* skip error if missing? */
|
2004-05-05 06:48:48 +02:00
|
|
|
} AlterTableCmd;
|
|
|
|
|
1997-05-22 02:17:24 +02:00
|
|
|
|
2017-03-23 20:25:34 +01:00
|
|
|
/* ----------------------
|
|
|
|
* Alter Collation
|
|
|
|
* ----------------------
|
|
|
|
*/
|
|
|
|
typedef struct AlterCollationStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
List *collname;
|
|
|
|
} AlterCollationStmt;
|
|
|
|
|
|
|
|
|
2002-12-06 06:00:34 +01:00
|
|
|
/* ----------------------
|
|
|
|
* Alter Domain
|
|
|
|
*
|
|
|
|
* The fields are used in different ways by the different variants of
|
2004-05-05 06:48:48 +02:00
|
|
|
* this command.
|
2002-12-06 06:00:34 +01:00
|
|
|
* ----------------------
|
|
|
|
*/
|
|
|
|
typedef struct AlterDomainStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
char subtype; /*------------
|
|
|
|
* T = alter column default
|
|
|
|
* N = alter column drop not null
|
|
|
|
* O = alter column set not null
|
|
|
|
* C = add constraint
|
|
|
|
* X = drop constraint
|
|
|
|
*------------
|
|
|
|
*/
|
2009-07-16 08:33:46 +02:00
|
|
|
List *typeName; /* domain to work on */
|
2004-06-25 23:55:59 +02:00
|
|
|
char *name; /* column or constraint name to act on */
|
2002-12-06 06:00:34 +01:00
|
|
|
Node *def; /* definition of default or constraint */
|
|
|
|
DropBehavior behavior; /* RESTRICT or CASCADE for DROP cases */
|
2012-01-05 18:48:55 +01:00
|
|
|
bool missing_ok; /* skip error if missing? */
|
2003-08-08 23:42:59 +02:00
|
|
|
} AlterDomainStmt;
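The subtype codes documented in the comment above can be decoded mechanically; a minimal sketch (the function name is hypothetical, but the character codes come straight from the comment):

```c
#include <stddef.h>
#include <string.h>

/* Decode AlterDomainStmt.subtype, per the character codes documented
 * in the struct's own comment. */
static const char *alter_domain_subtype_name(char subtype)
{
    switch (subtype)
    {
        case 'T': return "alter column default";
        case 'N': return "alter column drop not null";
        case 'O': return "alter column set not null";
        case 'C': return "add constraint";
        case 'X': return "drop constraint";
        default:  return NULL;      /* unrecognized code */
    }
}
```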
|
2002-12-06 06:00:34 +01:00
|
|
|
|
|
|
|
|
1996-08-28 03:59:28 +02:00
|
|
|
/* ----------------------
|
2002-04-21 02:26:44 +02:00
|
|
|
* Grant|Revoke Statement
|
1996-08-28 03:59:28 +02:00
|
|
|
* ----------------------
|
|
|
|
*/
|
2009-10-12 22:39:42 +02:00
|
|
|
typedef enum GrantTargetType
|
|
|
|
{
|
|
|
|
ACL_TARGET_OBJECT, /* grant on specific named object(s) */
|
|
|
|
ACL_TARGET_ALL_IN_SCHEMA, /* grant on all objects in given schema(s) */
|
|
|
|
ACL_TARGET_DEFAULTS /* ALTER DEFAULT PRIVILEGES */
|
|
|
|
} GrantTargetType;
|
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
typedef struct GrantStmt
|
1997-09-07 07:04:48 +02:00
|
|
|
{
|
1997-09-08 04:41:22 +02:00
|
|
|
NodeTag type;
|
2002-04-21 02:26:44 +02:00
|
|
|
bool is_grant; /* true = GRANT, false = REVOKE */
|
2009-10-12 22:39:42 +02:00
|
|
|
GrantTargetType targtype; /* type of the grant target */
|
2017-10-12 00:35:19 +02:00
|
|
|
ObjectType objtype; /* kind of object being operated on */
|
2017-05-17 22:31:56 +02:00
|
|
|
List *objects; /* list of RangeVar nodes, ObjectWithArgs
|
|
|
|
* nodes, or plain names (as Value strings) */
|
2009-01-22 21:16:10 +01:00
|
|
|
List *privileges; /* list of AccessPriv nodes */
|
|
|
|
/* privileges == NIL denotes ALL PRIVILEGES */
|
Allow CURRENT/SESSION_USER to be used in certain commands
Commands such as ALTER USER, ALTER GROUP, ALTER ROLE, GRANT, and the
various ALTER OBJECT / OWNER TO, as well as ad-hoc clauses related to
roles such as the AUTHORIZATION clause of CREATE SCHEMA, the FOR clause
of CREATE USER MAPPING, and the FOR ROLE clause of ALTER DEFAULT
PRIVILEGES can now take the keywords CURRENT_USER and SESSION_USER as
user specifiers in place of an explicit user name.
This commit also fixes some quite ugly handling of special standards-
mandated syntax in CREATE USER MAPPING, which in particular would fail
to work in presence of a role named "current_user".
The special role specifiers PUBLIC and NONE also have more consistent
handling now.
Also take the opportunity to add location tracking to user specifiers.
Authors: Kyotaro Horiguchi. Heavily reworked by Álvaro Herrera.
Reviewed by: Rushabh Lathia, Adam Brightwell, Marti Raudsepp.
2015-03-09 19:41:54 +01:00
|
|
|
List *grantees; /* list of RoleSpec nodes */
|
2003-01-24 00:39:07 +01:00
|
|
|
bool grant_option; /* grant or revoke grant option */
|
|
|
|
DropBehavior behavior; /* drop behavior (for REVOKE) */
|
2002-03-08 05:37:18 +01:00
|
|
|
} GrantStmt;
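The "privileges == NIL denotes ALL PRIVILEGES" convention noted above means callers must test the list for emptiness rather than looking for a special node. A minimal sketch with stand-in types (ListDemo, GrantStmtDemo, and grants_all_privileges are invented for illustration; in real code NIL is the empty List):

```c
#include <stddef.h>

/* Stand-in for the List type: NULL plays the role of NIL here. */
typedef struct ListDemo ListDemo;

typedef struct GrantStmtDemo
{
    int       is_grant;     /* true = GRANT, false = REVOKE */
    ListDemo *privileges;   /* list of AccessPriv nodes; NIL = ALL PRIVILEGES */
} GrantStmtDemo;

/* Per the note above: an empty privileges list means ALL PRIVILEGES. */
static int grants_all_privileges(const GrantStmtDemo *stmt)
{
    return stmt->privileges == NULL;
}
```

The same NIL convention appears again in AccessPriv, where cols == NIL means "all columns".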
|
1996-08-28 03:59:28 +02:00
|
|
|
|
2005-03-29 19:58:51 +02:00
|
|
|
/*
|
2016-12-28 18:00:00 +01:00
|
|
|
* Note: ObjectWithArgs carries only the types of the input parameters of the
|
2005-03-29 19:58:51 +02:00
|
|
|
* function. So it is sufficient to identify an existing function, but it
|
|
|
|
* is not enough info to define a function nor to call it.
|
|
|
|
*/
|
2016-12-28 18:00:00 +01:00
|
|
|
typedef struct ObjectWithArgs
|
1997-09-07 07:04:48 +02:00
|
|
|
{
|
1997-09-08 04:41:22 +02:00
|
|
|
NodeTag type;
|
2016-12-28 18:00:00 +01:00
|
|
|
List *objname; /* qualified name of function/operator */
|
|
|
|
List *objargs; /* list of Typename nodes */
|
Phase 2 of pgindent updates.
Change pg_bsd_indent to follow upstream rules for placement of comments
to the right of code, and remove pgindent hack that caused comments
following #endif to not obey the general rule.
Commit e3860ffa4dd0dad0dd9eea4be9cc1412373a8c89 wasn't actually using
the published version of pg_bsd_indent, but a hacked-up version that
tried to minimize the amount of movement of comments to the right of
code. The situation of interest is where such a comment has to be
moved to the right of its default placement at column 33 because there's
code there. BSD indent has always moved right in units of tab stops
in such cases --- but in the previous incarnation, indent was working
in 8-space tab stops, while now it knows we use 4-space tabs. So the
net result is that in about half the cases, such comments are placed
one tab stop left of before. This is better all around: it leaves
more room on the line for comment text, and it means that in such
cases the comment uniformly starts at the next 4-space tab stop after
the code, rather than sometimes one and sometimes two tabs after.
Also, ensure that comments following #endif are indented the same
as comments following other preprocessor commands such as #else.
That inconsistency turns out to have been self-inflicted damage
from a poorly-thought-through post-indent "fixup" in pgindent.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 21:18:54 +02:00
|
|
|
bool args_unspecified; /* argument list was omitted, so name must
|
|
|
|
* be unique (note that objargs == NIL
|
|
|
|
* means zero args) */
|
2016-12-28 18:00:00 +01:00
|
|
|
} ObjectWithArgs;
|
1996-08-28 03:59:28 +02:00
|
|
|
|
2009-01-22 21:16:10 +01:00
|
|
|
/*
|
|
|
|
* An access privilege, with optional list of column names
|
|
|
|
* priv_name == NULL denotes ALL PRIVILEGES (only used with a column list)
|
|
|
|
* cols == NIL denotes "all columns"
|
|
|
|
* Note that simple "ALL PRIVILEGES" is represented as a NIL list, not
|
|
|
|
* an AccessPriv with both fields null.
|
|
|
|
*/
|
|
|
|
typedef struct AccessPriv
|
1997-09-07 07:04:48 +02:00
|
|
|
{
|
1997-09-08 04:41:22 +02:00
|
|
|
NodeTag type;
|
2009-01-22 21:16:10 +01:00
|
|
|
char *priv_name; /* string name of privilege */
|
|
|
|
List *cols; /* list of Value strings */
|
|
|
|
} AccessPriv;
|
2002-03-08 05:37:18 +01:00
|
|
|
|
2005-06-28 07:09:14 +02:00
|
|
|
/* ----------------------
|
|
|
|
* Grant/Revoke Role Statement
|
|
|
|
*
|
2009-01-22 21:16:10 +01:00
|
|
|
* Note: because of the parsing ambiguity with the GRANT <privileges>
|
|
|
|
* statement, granted_roles is a list of AccessPriv; the execution code
|
2014-05-06 18:12:18 +02:00
|
|
|
* should complain if any column lists appear. grantee_roles is a list
|
2009-01-22 21:16:10 +01:00
|
|
|
* of role names, as Value strings.
|
2005-06-28 07:09:14 +02:00
|
|
|
* ----------------------
|
|
|
|
*/
|
|
|
|
typedef struct GrantRoleStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
List *granted_roles; /* list of roles to be granted/revoked */
|
|
|
|
List *grantee_roles; /* list of member roles to add/delete */
|
|
|
|
bool is_grant; /* true = GRANT, false = REVOKE */
|
|
|
|
bool admin_opt; /* with admin option */
|
2016-12-28 18:00:00 +01:00
|
|
|
RoleSpec *grantor; /* set grantor to other than current role */
|
2005-06-28 07:09:14 +02:00
|
|
|
DropBehavior behavior; /* drop behavior (for REVOKE) */
|
|
|
|
} GrantRoleStmt;
|
|
|
|
|
2009-10-05 21:24:49 +02:00
|
|
|
/* ----------------------
|
|
|
|
* Alter Default Privileges Statement
|
|
|
|
* ----------------------
|
|
|
|
*/
|
|
|
|
typedef struct AlterDefaultPrivilegesStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
List *options; /* list of DefElem */
|
|
|
|
GrantStmt *action; /* GRANT/REVOKE action (with objects=NIL) */
|
|
|
|
} AlterDefaultPrivilegesStmt;
|
|
|
|
|
1996-08-28 03:59:28 +02:00
|
|
|
/* ----------------------
|
2002-03-08 05:37:18 +01:00
|
|
|
* Copy Statement
|
2006-08-31 01:34:22 +02:00
|
|
|
*
|
|
|
|
* We support "COPY relation FROM file", "COPY relation TO file", and
|
2014-05-06 18:12:18 +02:00
|
|
|
* "COPY (query) TO file". In any given CopyStmt, exactly one of "relation"
|
2007-03-13 01:33:44 +01:00
|
|
|
* and "query" must be non-NULL.
|
1996-08-28 03:59:28 +02:00
|
|
|
* ----------------------
|
|
|
|
*/
|
2002-03-08 05:37:18 +01:00
|
|
|
typedef struct CopyStmt
|
1997-09-07 07:04:48 +02:00
|
|
|
{
|
1997-09-08 04:41:22 +02:00
|
|
|
NodeTag type;
|
2002-03-21 17:02:16 +01:00
|
|
|
RangeVar *relation; /* the relation to copy */
|
2015-11-27 17:11:22 +01:00
|
|
|
Node *query; /* the query (SELECT or DML statement with
|
Change representation of statement lists, and add statement location info.
This patch makes several changes that improve the consistency of
representation of lists of statements. It's always been the case
that the output of parse analysis is a list of Query nodes, whatever
the types of the individual statements in the list. This patch brings
similar consistency to the outputs of raw parsing and planning steps:
* The output of raw parsing is now always a list of RawStmt nodes;
the statement-type-dependent nodes are one level down from that.
* The output of pg_plan_queries() is now always a list of PlannedStmt
nodes, even for utility statements. In the case of a utility statement,
"planning" just consists of wrapping a CMD_UTILITY PlannedStmt around
the utility node. This list representation is now used in Portal and
CachedPlan plan lists, replacing the former convention of intermixing
PlannedStmts with bare utility-statement nodes.
Now, every list of statements has a consistent head-node type depending
on how far along it is in processing. This allows changing many places
that formerly used generic "Node *" pointers to use a more specific
pointer type, thus reducing the number of IsA() tests and casts needed,
as well as improving code clarity.
Also, the post-parse-analysis representation of DECLARE CURSOR is changed
so that it looks more like EXPLAIN, PREPARE, etc. That is, the contained
SELECT remains a child of the DeclareCursorStmt rather than getting flipped
around to be the other way. It's now true for both Query and PlannedStmt
that utilityStmt is non-null if and only if commandType is CMD_UTILITY.
That allows simplifying a lot of places that were testing both fields.
(I think some of those were just defensive programming, but in many places,
it was actually necessary to avoid confusing DECLARE CURSOR with SELECT.)
Because PlannedStmt carries a canSetTag field, we're also able to get rid
of some ad-hoc rules about how to reconstruct canSetTag for a bare utility
statement; specifically, the assumption that a utility is canSetTag if and
only if it's the only one in its list. While I see no near-term need for
relaxing that restriction, it's nice to get rid of the ad-hocery.
The API of ProcessUtility() is changed so that what it's passed is the
wrapper PlannedStmt not just the bare utility statement. This will affect
all users of ProcessUtility_hook, but the changes are pretty trivial; see
the affected contrib modules for examples of the minimum change needed.
(Most compilers should give pointer-type-mismatch warnings for uncorrected
code.)
There's also a change in the API of ExplainOneQuery_hook, to pass through
cursorOptions instead of expecting hook functions to know what to pick.
This is needed because of the DECLARE CURSOR changes, but really should
have been done in 9.6; it's unlikely that any extant hook functions
know about using CURSOR_OPT_PARALLEL_OK.
Finally, teach gram.y to save statement boundary locations in RawStmt
nodes, and pass those through to Query and PlannedStmt nodes. This allows
more intelligent handling of cases where a source query string contains
multiple statements. This patch doesn't actually do anything with the
information, but a follow-on patch will. (Passing this information through
cleanly is the true motivation for these changes; while I think this is all
good cleanup, it's unlikely we'd have bothered without this end goal.)
catversion bump because addition of location fields to struct Query
affects stored rules.
This patch is by me, but it owes a good deal to Fabien Coelho who did
a lot of preliminary work on the problem, and also reviewed the patch.
Discussion: https://postgr.es/m/alpine.DEB.2.20.1612200926310.29821@lancre
2017-01-14 22:02:35 +01:00
|
|
|
* RETURNING) to copy, as a raw parse tree */
|
2005-10-15 04:49:52 +02:00
|
|
|
List *attlist; /* List of column names (as Strings), or NIL
|
|
|
|
* for all columns */
|
2002-06-20 18:00:44 +02:00
|
|
|
bool is_from; /* TO or FROM */
|
Add support for piping COPY to/from an external program.
This includes backend "COPY TO/FROM PROGRAM '...'" syntax, and corresponding
psql \copy syntax. Like with reading/writing files, the backend version is
superuser-only, and in the psql version, the program is run in the client.
In passing, the psql \copy STDIN/STDOUT syntax is subtly changed: if
the stdin/stdout is quoted, it's now interpreted as a filename. For example,
"\copy foo from 'stdin'" now reads from a file called 'stdin', not from
standard input. Before this, there was no way to specify a filename called
stdin, stdout, pstdin or pstdout.
This creates a new function in pgport, wait_result_to_str(), which can
be used to convert the exit status of a process, as returned by wait(3),
to a human-readable string.
Etsuro Fujita, reviewed by Amit Kapila.
2013-02-27 17:17:21 +01:00
|
|
|
bool is_program; /* is 'filename' a program to popen? */
|
2006-08-31 01:34:22 +02:00
|
|
|
char *filename; /* filename, or NULL for STDIN/STDOUT */
|
2002-06-20 18:00:44 +02:00
|
|
|
List *options; /* List of DefElem nodes */
|
2019-01-19 23:48:16 +01:00
|
|
|
Node *whereClause; /* WHERE condition (or NULL) */
|
2002-03-08 05:37:18 +01:00
|
|
|
} CopyStmt;
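The header comment above pins down an invariant: exactly one of "relation" and "query" is non-NULL, and the query form is only supported in the TO direction. A sketch of that check with stand-in types (CopyStmtDemo and copy_stmt_valid are hypothetical; the real fields are RangeVar * and Node *):

```c
#include <stddef.h>

/* Stand-in for the relevant CopyStmt fields. */
typedef struct CopyStmtDemo
{
    void *relation;     /* the relation to copy, or NULL */
    void *query;        /* the query to copy, or NULL */
    int   is_from;      /* COPY FROM (true) vs. COPY TO (false) */
} CopyStmtDemo;

/* Enforce the invariants stated in the comment: exactly one of relation
 * and query is set, and "COPY (query) FROM" is not a supported form. */
static int copy_stmt_valid(const CopyStmtDemo *c)
{
    if ((c->relation != NULL) == (c->query != NULL))
        return 0;       /* must be exactly one */
    if (c->query != NULL && c->is_from)
        return 0;       /* only "COPY (query) TO" is supported */
    return 1;
}
```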
|
1998-08-25 23:37:08 +02:00
|
|
|
|
2007-09-03 20:46:30 +02:00
|
|
|
/* ----------------------
|
|
|
|
* SET Statement (includes RESET)
|
|
|
|
*
|
|
|
|
* "SET var TO DEFAULT" and "RESET var" are semantically equivalent, but we
|
|
|
|
* preserve the distinction in VariableSetKind for CreateCommandTag().
|
|
|
|
* ----------------------
|
|
|
|
*/
|
|
|
|
typedef enum
|
|
|
|
{
|
|
|
|
VAR_SET_VALUE, /* SET var = value */
|
|
|
|
VAR_SET_DEFAULT, /* SET var TO DEFAULT */
|
|
|
|
VAR_SET_CURRENT, /* SET var FROM CURRENT */
|
|
|
|
VAR_SET_MULTI, /* special case for SET TRANSACTION ... */
|
|
|
|
VAR_RESET, /* RESET var */
|
|
|
|
VAR_RESET_ALL /* RESET ALL */
|
2007-11-15 23:25:18 +01:00
|
|
|
} VariableSetKind;
|
2007-09-03 20:46:30 +02:00
|
|
|
|
|
|
|
typedef struct VariableSetStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
VariableSetKind kind;
|
|
|
|
char *name; /* variable to be set */
|
|
|
|
List *args; /* List of A_Const nodes */
|
|
|
|
bool is_local; /* SET LOCAL? */
|
|
|
|
} VariableSetStmt;
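Because "SET var TO DEFAULT" and "RESET var" are kept distinct in VariableSetKind, the kind maps one-to-one back to the SQL shape the user typed. A sketch of that mapping (the *Demo enum and function name are stand-ins; the values mirror the comments in the real enum above):

```c
#include <stddef.h>
#include <string.h>

/* Stand-in mirroring VariableSetKind. */
typedef enum
{
    DEMO_VAR_SET_VALUE,
    DEMO_VAR_SET_DEFAULT,
    DEMO_VAR_SET_CURRENT,
    DEMO_VAR_SET_MULTI,
    DEMO_VAR_RESET,
    DEMO_VAR_RESET_ALL
} VariableSetKindDemo;

/* Map each kind to the SQL shape that produces it. */
static const char *variable_set_kind_sql(VariableSetKindDemo kind)
{
    switch (kind)
    {
        case DEMO_VAR_SET_VALUE:   return "SET var = value";
        case DEMO_VAR_SET_DEFAULT: return "SET var TO DEFAULT";
        case DEMO_VAR_SET_CURRENT: return "SET var FROM CURRENT";
        case DEMO_VAR_SET_MULTI:   return "SET TRANSACTION ...";
        case DEMO_VAR_RESET:       return "RESET var";
        case DEMO_VAR_RESET_ALL:   return "RESET ALL";
        default:                   return NULL;
    }
}
```

This is why the command tag can faithfully echo RESET rather than SET even though both forms have the same effect.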
|
|
|
|
|
|
|
|
/* ----------------------
|
|
|
|
* Show Statement
|
|
|
|
* ----------------------
|
|
|
|
*/
|
|
|
|
typedef struct VariableShowStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
char *name;
|
|
|
|
} VariableShowStmt;
|
|
|
|
|
1998-08-25 23:37:08 +02:00
|
|
|
/* ----------------------
|
2002-03-08 05:37:18 +01:00
|
|
|
* Create Table Statement
|
|
|
|
*
|
2009-07-30 04:45:38 +02:00
|
|
|
* NOTE: in the raw gram.y output, ColumnDef and Constraint nodes are
|
|
|
|
* intermixed in tableElts, and constraints is NIL. After parse analysis,
|
|
|
|
* tableElts contains just ColumnDefs, and constraints contains just
|
|
|
|
* Constraint nodes (in fact, only CONSTR_CHECK nodes, in the present
|
2002-03-08 05:37:18 +01:00
|
|
|
* implementation).
|
1998-08-25 23:37:08 +02:00
|
|
|
* ----------------------
|
|
|
|
*/
|
2002-11-11 23:19:25 +01:00
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
typedef struct CreateStmt
|
1998-08-25 23:37:08 +02:00
|
|
|
{
|
|
|
|
NodeTag type;
|
2002-03-21 17:02:16 +01:00
|
|
|
RangeVar *relation; /* relation to create */
|
2002-03-08 05:37:18 +01:00
|
|
|
List *tableElts; /* column definitions (list of ColumnDef) */
|
2003-08-04 02:43:34 +02:00
|
|
|
List *inhRelations; /* relations to inherit from (list of
|
|
|
|
* inhRelation) */
|
Phase 2 of pgindent updates.
Change pg_bsd_indent to follow upstream rules for placement of comments
to the right of code, and remove pgindent hack that caused comments
following #endif to not obey the general rule.
Commit e3860ffa4dd0dad0dd9eea4be9cc1412373a8c89 wasn't actually using
the published version of pg_bsd_indent, but a hacked-up version that
tried to minimize the amount of movement of comments to the right of
code. The situation of interest is where such a comment has to be
moved to the right of its default placement at column 33 because there's
code there. BSD indent has always moved right in units of tab stops
in such cases --- but in the previous incarnation, indent was working
in 8-space tab stops, while now it knows we use 4-space tabs. So the
net result is that in about half the cases, such comments are placed
one tab stop left of before. This is better all around: it leaves
more room on the line for comment text, and it means that in such
cases the comment uniformly starts at the next 4-space tab stop after
the code, rather than sometimes one and sometimes two tabs after.
Also, ensure that comments following #endif are indented the same
as comments following other preprocessor commands such as #else.
That inconsistency turns out to have been self-inflicted damage
from a poorly-thought-through post-indent "fixup" in pgindent.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 21:18:54 +02:00
|
|
|
PartitionBoundSpec *partbound; /* FOR VALUES clause */
|
Implement table partitioning.
Table partitioning is like table inheritance and reuses much of the
existing infrastructure, but there are some important differences.
The parent is called a partitioned table and is always empty; it may
not have indexes or non-inherited constraints, since those make no
sense for a relation with no data of its own. The children are called
partitions and contain all of the actual data. Each partition has an
implicit partitioning constraint. Multiple inheritance is not
allowed, and partitioning and inheritance can't be mixed. Partitions
can't have extra columns and may not allow nulls unless the parent
does. Tuples inserted into the parent are automatically routed to the
correct partition, so tuple-routing ON INSERT triggers are not needed.
Tuple routing isn't yet supported for partitions which are foreign
tables, and it doesn't handle updates that cross partition boundaries.
Currently, tables can be range-partitioned or list-partitioned. List
partitioning is limited to a single column, but range partitioning can
involve multiple columns. A partitioning "column" can be an
expression.
Because table partitioning is less general than table inheritance, it
is hoped that it will be easier to reason about properties of
partitions, and therefore that this will serve as a better foundation
for a variety of possible optimizations, including query planner
optimizations. The tuple routing based which this patch does based on
	PartitionSpec *partspec;	/* PARTITION BY clause */
	TypeName   *ofTypename;		/* OF typename */
	List	   *constraints;	/* constraints (list of Constraint nodes) */
	List	   *options;		/* options from WITH clause */
	OnCommitAction oncommit;	/* what do we do at COMMIT? */
	char	   *tablespacename; /* table space to use, or NULL */
	char	   *accessMethod;	/* table access method */
	bool		if_not_exists;	/* just do nothing if it already exists? */
} CreateStmt;

/* ----------
 * Definitions for constraints in CreateStmt
 *
 * Note that column defaults are treated as a type of constraint,
 * even though that's a bit odd semantically.
 *
 * For constraints that use expressions (CONSTR_CHECK, CONSTR_DEFAULT)
 * we may have the expression in either "raw" form (an untransformed
 * parse tree) or "cooked" form (the nodeToString representation of
 * an executable expression tree), depending on how this Constraint
 * node was created (by parsing, or by inheritance from an existing
 * relation).  We should never have both in the same node!
 *
 * FKCONSTR_ACTION_xxx values are stored into pg_constraint.confupdtype
 * and pg_constraint.confdeltype columns; FKCONSTR_MATCH_xxx values are
 * stored into pg_constraint.confmatchtype.  Changing the code values may
 * require an initdb!
 *
 * If skip_validation is true then we skip checking that the existing rows
 * in the table satisfy the constraint, and just install the catalog entries
 * for the constraint.  A new FK constraint is marked as valid iff
 * initially_valid is true.  (Usually skip_validation and initially_valid
 * are inverses, but we can set both true if the table is known empty.)
 *
 * Constraint attributes (DEFERRABLE etc) are initially represented as
 * separate Constraint nodes for simplicity of parsing.  parse_utilcmd.c makes
 * a pass through the constraints list to insert the info into the appropriate
 * Constraint node.
 * ----------
 */

typedef enum ConstrType			/* types of constraints */
{
	CONSTR_NULL,				/* not standard SQL, but a lot of people
								 * expect it */
	CONSTR_NOTNULL,
	CONSTR_DEFAULT,
	CONSTR_IDENTITY,
	CONSTR_GENERATED,
	CONSTR_CHECK,
	CONSTR_PRIMARY,
	CONSTR_UNIQUE,
	CONSTR_EXCLUSION,
	CONSTR_FOREIGN,
	CONSTR_ATTR_DEFERRABLE,		/* attributes for previous constraint node */
	CONSTR_ATTR_NOT_DEFERRABLE,
	CONSTR_ATTR_DEFERRED,
	CONSTR_ATTR_IMMEDIATE
} ConstrType;

/* Foreign key action codes */
#define FKCONSTR_ACTION_NOACTION	'a'
#define FKCONSTR_ACTION_RESTRICT	'r'
#define FKCONSTR_ACTION_CASCADE		'c'
#define FKCONSTR_ACTION_SETNULL		'n'
#define FKCONSTR_ACTION_SETDEFAULT	'd'

/* Foreign key matchtype codes */
#define FKCONSTR_MATCH_FULL			'f'
#define FKCONSTR_MATCH_PARTIAL		'p'
#define FKCONSTR_MATCH_SIMPLE		's'

typedef struct Constraint
{
	NodeTag		type;
	ConstrType	contype;		/* see above */

	/* Fields used for most/all constraint types: */
	char	   *conname;		/* Constraint name, or NULL if unnamed */
	bool		deferrable;		/* DEFERRABLE? */
	bool		initdeferred;	/* INITIALLY DEFERRED? */
	int			location;		/* token location, or -1 if unknown */

	/* Fields used for constraints with expressions (CHECK and DEFAULT): */
	bool		is_no_inherit;	/* is constraint non-inheritable? */
	Node	   *raw_expr;		/* expr, as untransformed parse tree */
	char	   *cooked_expr;	/* expr, as nodeToString representation */
	char		generated_when; /* ALWAYS or BY DEFAULT */

	/* Fields used for unique constraints (UNIQUE and PRIMARY KEY): */
	List	   *keys;			/* String nodes naming referenced key
								 * column(s) */
	List	   *including;		/* String nodes naming referenced nonkey
								 * column(s) */

	/* Fields used for EXCLUSION constraints: */
	List	   *exclusions;		/* list of (IndexElem, operator name) pairs */

	/* Fields used for index constraints (UNIQUE, PRIMARY KEY, EXCLUSION): */
	List	   *options;		/* options from WITH clause */
	char	   *indexname;		/* existing index to use; otherwise NULL */
	char	   *indexspace;		/* index tablespace; NULL for default */
	bool		reset_default_tblspc;	/* reset default_tablespace prior to
										 * creating the index */
	/* These could be, but currently are not, used for UNIQUE/PKEY: */
	char	   *access_method;	/* index access method; NULL for default */
	Node	   *where_clause;	/* partial index predicate */

	/* Fields used for FOREIGN KEY constraints: */
	RangeVar   *pktable;		/* Primary key table */
	List	   *fk_attrs;		/* Attributes of foreign key */
	List	   *pk_attrs;		/* Corresponding attrs in PK table */
	char		fk_matchtype;	/* FULL, PARTIAL, SIMPLE */
	char		fk_upd_action;	/* ON UPDATE action */
	char		fk_del_action;	/* ON DELETE action */
	List	   *old_conpfeqop;	/* pg_constraint.conpfeqop of my former self */
	Oid			old_pktable_oid;	/* pg_constraint.confrelid of my former
									 * self */

	/* Fields used for constraints that allow a NOT VALID specification */
	bool		skip_validation;	/* skip validation of existing rows? */
	bool		initially_valid;	/* mark the new constraint as valid? */
} Constraint;

/* ----------------------
 *		Create/Drop Table Space Statements
 * ----------------------
 */

typedef struct CreateTableSpaceStmt
{
	NodeTag		type;
	char	   *tablespacename;
	RoleSpec   *owner;
	char	   *location;
	List	   *options;
} CreateTableSpaceStmt;

typedef struct DropTableSpaceStmt
{
	NodeTag		type;
	char	   *tablespacename;
	bool		missing_ok;		/* skip error if missing? */
} DropTableSpaceStmt;

typedef struct AlterTableSpaceOptionsStmt
{
	NodeTag		type;
	char	   *tablespacename;
	List	   *options;
	bool		isReset;
} AlterTableSpaceOptionsStmt;

typedef struct AlterTableMoveAllStmt
{
	NodeTag		type;
	char	   *orig_tablespacename;
	ObjectType	objtype;		/* Object type to move */
	List	   *roles;			/* List of roles to move objects of */
	char	   *new_tablespacename;
	bool		nowait;
} AlterTableMoveAllStmt;

/* ----------------------
 *		Create/Alter Extension Statements
 * ----------------------
 */

typedef struct CreateExtensionStmt
{
	NodeTag		type;
	char	   *extname;
	bool		if_not_exists;	/* just do nothing if it already exists? */
	List	   *options;		/* List of DefElem nodes */
} CreateExtensionStmt;

/* Only used for ALTER EXTENSION UPDATE; later might need an action field */
typedef struct AlterExtensionStmt
{
	NodeTag		type;
	char	   *extname;
	List	   *options;		/* List of DefElem nodes */
} AlterExtensionStmt;

typedef struct AlterExtensionContentsStmt
{
	NodeTag		type;
	char	   *extname;		/* Extension's name */
	int			action;			/* +1 = add object, -1 = drop object */
	ObjectType	objtype;		/* Object's type */
	Node	   *object;			/* Qualified name of the object */
} AlterExtensionContentsStmt;

/* ----------------------
 *		Create/Alter FOREIGN DATA WRAPPER Statements
 * ----------------------
 */

typedef struct CreateFdwStmt
{
	NodeTag		type;
	char	   *fdwname;		/* foreign-data wrapper name */
	List	   *func_options;	/* HANDLER/VALIDATOR options */
	List	   *options;		/* generic options to FDW */
} CreateFdwStmt;

typedef struct AlterFdwStmt
{
	NodeTag		type;
	char	   *fdwname;		/* foreign-data wrapper name */
	List	   *func_options;	/* HANDLER/VALIDATOR options */
	List	   *options;		/* generic options to FDW */
} AlterFdwStmt;

/* ----------------------
 *		Create/Alter FOREIGN SERVER Statements
 * ----------------------
 */

typedef struct CreateForeignServerStmt
{
	NodeTag		type;
	char	   *servername;		/* server name */
	char	   *servertype;		/* optional server type */
	char	   *version;		/* optional server version */
	char	   *fdwname;		/* FDW name */
	bool		if_not_exists;	/* just do nothing if it already exists? */
	List	   *options;		/* generic options to server */
} CreateForeignServerStmt;

typedef struct AlterForeignServerStmt
{
	NodeTag		type;
	char	   *servername;		/* server name */
	char	   *version;		/* optional server version */
	List	   *options;		/* generic options to server */
	bool		has_version;	/* version specified */
} AlterForeignServerStmt;

/* ----------------------
 *		Create FOREIGN TABLE Statement
 * ----------------------
 */

typedef struct CreateForeignTableStmt
{
	CreateStmt	base;
	char	   *servername;
	List	   *options;
} CreateForeignTableStmt;

/* ----------------------
 *		Create/Drop USER MAPPING Statements
 * ----------------------
 */

typedef struct CreateUserMappingStmt
{
	NodeTag		type;
	RoleSpec   *user;			/* user role */
	char	   *servername;		/* server name */
	bool		if_not_exists;	/* just do nothing if it already exists? */
	List	   *options;		/* generic options to server */
} CreateUserMappingStmt;

typedef struct AlterUserMappingStmt
{
	NodeTag		type;
	RoleSpec   *user;			/* user role */
	char	   *servername;		/* server name */
	List	   *options;		/* generic options to server */
} AlterUserMappingStmt;

typedef struct DropUserMappingStmt
{
	NodeTag		type;
	RoleSpec   *user;			/* user role */
	char	   *servername;		/* server name */
	bool		missing_ok;		/* ignore missing mappings */
} DropUserMappingStmt;

/* ----------------------
 *		Import Foreign Schema Statement
 * ----------------------
 */

typedef enum ImportForeignSchemaType
{
	FDW_IMPORT_SCHEMA_ALL,		/* all relations wanted */
	FDW_IMPORT_SCHEMA_LIMIT_TO, /* include only listed tables in import */
	FDW_IMPORT_SCHEMA_EXCEPT	/* exclude listed tables from import */
} ImportForeignSchemaType;

typedef struct ImportForeignSchemaStmt
{
	NodeTag		type;
	char	   *server_name;	/* FDW server name */
	char	   *remote_schema;	/* remote schema name to query */
	char	   *local_schema;	/* local schema to create objects in */
	ImportForeignSchemaType list_type;	/* type of table list */
	List	   *table_list;		/* List of RangeVar */
	List	   *options;		/* list of options to pass to FDW */
} ImportForeignSchemaStmt;

/*----------------------
 *		Create POLICY Statement
 *----------------------
 */
typedef struct CreatePolicyStmt
{
	NodeTag		type;
	char	   *policy_name;	/* Policy's name */
	RangeVar   *table;			/* the table name the policy applies to */
	char	   *cmd_name;		/* the command name the policy applies to */
	bool		permissive;		/* restrictive or permissive policy */
	List	   *roles;			/* the roles associated with the policy */
	Node	   *qual;			/* the policy's condition */
	Node	   *with_check;		/* the policy's WITH CHECK condition. */
} CreatePolicyStmt;

/*----------------------
 *		Alter POLICY Statement
 *----------------------
 */
typedef struct AlterPolicyStmt
{
	NodeTag		type;
	char	   *policy_name;	/* Policy's name */
	RangeVar   *table;			/* the table name the policy applies to */
	List	   *roles;			/* the roles associated with the policy */
	Node	   *qual;			/* the policy's condition */
	Node	   *with_check;		/* the policy's WITH CHECK condition. */
} AlterPolicyStmt;

/*----------------------
 *		Create ACCESS METHOD Statement
 *----------------------
 */
typedef struct CreateAmStmt
{
	NodeTag		type;
	char	   *amname;			/* access method name */
	List	   *handler_name;	/* handler function name */
	char		amtype;			/* type of access method */
} CreateAmStmt;

/* ----------------------
 *		Create TRIGGER Statement
 * ----------------------
 */
typedef struct CreateTrigStmt
{
	NodeTag		type;
	char	   *trigname;		/* TRIGGER's name */
	RangeVar   *relation;		/* relation trigger is on */
	List	   *funcname;		/* qual. name of function to call */
	List	   *args;			/* list of (T_String) Values or NIL */
	bool		row;			/* ROW/STATEMENT */
	/* timing uses the TRIGGER_TYPE bits defined in catalog/pg_trigger.h */
	int16		timing;			/* BEFORE, AFTER, or INSTEAD */
	/* events uses the TRIGGER_TYPE bits defined in catalog/pg_trigger.h */
	int16		events;			/* "OR" of INSERT/UPDATE/DELETE/TRUNCATE */
	List	   *columns;		/* column names, or NIL for all columns */
	Node	   *whenClause;		/* qual expression, or NULL if none */
	bool		isconstraint;	/* This is a constraint trigger */
	/* explicitly named transition data */
	List	   *transitionRels; /* TriggerTransition nodes, or NIL if none */
	/* The remaining fields are only used for constraint triggers */
	bool		deferrable;		/* [NOT] DEFERRABLE */
	bool		initdeferred;	/* INITIALLY {DEFERRED|IMMEDIATE} */
	RangeVar   *constrrel;		/* opposite relation, if RI trigger */
} CreateTrigStmt;

/* ----------------------
 *		Create EVENT TRIGGER Statement
 * ----------------------
 */
typedef struct CreateEventTrigStmt
{
	NodeTag		type;
	char	   *trigname;		/* TRIGGER's name */
	char	   *eventname;		/* event's identifier */
	List	   *whenclause;		/* list of DefElems indicating filtering */
	List	   *funcname;		/* qual. name of function to call */
} CreateEventTrigStmt;

/* ----------------------
 *		Alter EVENT TRIGGER Statement
 * ----------------------
 */
typedef struct AlterEventTrigStmt
{
	NodeTag		type;
	char	   *trigname;		/* TRIGGER's name */
	char		tgenabled;		/* trigger's firing configuration WRT
								 * session_replication_role */
} AlterEventTrigStmt;

/* ----------------------
 *		Create LANGUAGE Statements
 * ----------------------
 */
typedef struct CreatePLangStmt
{
	NodeTag		type;
	bool		replace;		/* T => replace if already exists */
	char	   *plname;			/* PL name */
	List	   *plhandler;		/* PL call handler function (qual. name) */
	List	   *plinline;		/* optional inline function (qual. name) */
	List	   *plvalidator;	/* optional validator function (qual. name) */
	bool		pltrusted;		/* PL is trusted */
} CreatePLangStmt;

/* ----------------------
 *	Create/Alter/Drop Role Statements
 *
 * Note: these node types are also used for the backwards-compatible
 * Create/Alter/Drop User/Group statements.  In the ALTER and DROP cases
 * there's really no need to distinguish what the original spelling was,
 * but for CREATE we mark the type because the defaults vary.
 * ----------------------
 */
typedef enum RoleStmtType
{
	ROLESTMT_ROLE,
	ROLESTMT_USER,
	ROLESTMT_GROUP
} RoleStmtType;

typedef struct CreateRoleStmt
{
	NodeTag		type;
	RoleStmtType stmt_type;		/* ROLE/USER/GROUP */
	char	   *role;			/* role name */
	List	   *options;		/* List of DefElem nodes */
} CreateRoleStmt;

typedef struct AlterRoleStmt
{
	NodeTag		type;
	RoleSpec   *role;			/* role */
	List	   *options;		/* List of DefElem nodes */
	int			action;			/* +1 = add members, -1 = drop members */
} AlterRoleStmt;

typedef struct AlterRoleSetStmt
{
	NodeTag		type;
	RoleSpec   *role;			/* role */
	char	   *database;		/* database name, or NULL */
	VariableSetStmt *setstmt;	/* SET or RESET subcommand */
} AlterRoleSetStmt;

typedef struct DropRoleStmt
{
	NodeTag		type;
	List	   *roles;			/* List of roles to remove */
	bool		missing_ok;		/* skip error if a role is missing? */
} DropRoleStmt;

/* ----------------------
 *		{Create|Alter} SEQUENCE Statement
 * ----------------------
 */

typedef struct CreateSeqStmt
{
	NodeTag		type;
	RangeVar   *sequence;		/* the sequence to create */
	List	   *options;
	Oid			ownerId;		/* ID of owner, or InvalidOid for default */
	bool		for_identity;
	bool		if_not_exists;	/* just do nothing if it already exists? */
} CreateSeqStmt;

typedef struct AlterSeqStmt
{
	NodeTag		type;
	RangeVar   *sequence;		/* the sequence to alter */
	List	   *options;
	bool		for_identity;
	bool		missing_ok;		/* skip error if the sequence is missing? */
} AlterSeqStmt;

/* ----------------------
 *		Create {Aggregate|Operator|Type} Statement
 * ----------------------
 */
typedef struct DefineStmt
{
	NodeTag		type;
	ObjectType	kind;			/* aggregate, operator, type */
	bool		oldstyle;		/* hack to signal old CREATE AGG syntax */
	List	   *defnames;		/* qualified name (list of Value strings) */
	List	   *args;			/* a list of TypeName (if needed) */
	List	   *definition;		/* a list of DefElem */
	bool		if_not_exists;	/* just do nothing if it already exists? */
	bool		replace;		/* replace if already exists? */
} DefineStmt;
|
1999-09-29 18:06:40 +02:00
|
|
|
|
2002-03-20 20:45:13 +01:00
|
|
|
/* ----------------------
|
|
|
|
* Create Domain Statement
|
|
|
|
* ----------------------
|
|
|
|
*/
|
|
|
|
typedef struct CreateDomainStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
2002-03-29 20:06:29 +01:00
|
|
|
List *domainname; /* qualified name (list of Value strings) */
|
2009-07-16 08:33:46 +02:00
|
|
|
TypeName *typeName; /* the base type */
|
Remove collation information from TypeName, where it does not belong.
The initial collations patch treated a COLLATE spec as part of a TypeName,
following what can only be described as brain fade on the part of the SQL
committee. It's a lot more reasonable to treat COLLATE as a syntactically
separate object, so that it can be added in only the productions where it
actually belongs, rather than needing to reject it in a boatload of places
where it doesn't belong (something the original patch mostly failed to do).
In addition this change lets us meet the spec's requirement to allow
COLLATE anywhere in the clauses of a ColumnDef, and it avoids unfriendly
behavior for constructs such as "foo::type COLLATE collation".
To do this, pull collation information out of TypeName and put it in
ColumnDef instead, thus reverting most of the collation-related changes in
parse_type.c's API. I made one additional structural change, which was to
use a ColumnDef as an intermediate node in AT_AlterColumnType AlterTableCmd
nodes. This provides enough room to get rid of the "transform" wart in
AlterTableCmd too, since the ColumnDef can carry the USING expression
easily enough.
Also fix some other minor bugs that have crept in in the same areas,
like failure to copy recently-added fields of ColumnDef in copyfuncs.c.
While at it, document the formerly secret ability to specify a collation
in ALTER TABLE ALTER COLUMN TYPE, ALTER TYPE ADD ATTRIBUTE, and
ALTER TYPE ALTER ATTRIBUTE TYPE; and correct some misstatements about
what the default collation selection will be when COLLATE is omitted.
BTW, the three-parameter form of format_type() should go away too,
since it just contributes to the confusion in this area; but I'll do
that in a separate patch.
2011-03-10 04:38:52 +01:00
|
|
|
CollateClause *collClause; /* untransformed COLLATE spec, if any */
|
2002-03-29 20:06:29 +01:00
|
|
|
List *constraints; /* constraints (list of Constraint nodes) */
|
2002-03-20 20:45:13 +01:00
|
|
|
} CreateDomainStmt;
|
|
|
|
|
2002-07-30 00:14:11 +02:00
|
|
|
/* ----------------------
|
|
|
|
* Create Operator Class Statement
|
|
|
|
* ----------------------
|
|
|
|
*/
|
|
|
|
typedef struct CreateOpClassStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
List *opclassname; /* qualified name (list of Value strings) */
|
2006-12-23 01:43:13 +01:00
|
|
|
List *opfamilyname; /* qualified name (ditto); NIL if omitted */
|
2002-07-30 00:14:11 +02:00
|
|
|
char *amname; /* name of index AM opclass is for */
|
|
|
|
TypeName *datatype; /* datatype of indexed column */
|
|
|
|
List *items; /* List of CreateOpClassItem nodes */
|
|
|
|
bool isDefault; /* Should be marked as default for type? */
|
|
|
|
} CreateOpClassStmt;
|
|
|
|
|
|
|
|
#define OPCLASS_ITEM_OPERATOR 1
|
|
|
|
#define OPCLASS_ITEM_FUNCTION 2
|
|
|
|
#define OPCLASS_ITEM_STORAGETYPE 3
|
|
|
|
|
|
|
|
typedef struct CreateOpClassItem
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
int itemtype; /* see codes above */
|
2016-12-28 18:00:00 +01:00
|
|
|
ObjectWithArgs *name; /* operator or function name and args */
|
2002-07-30 00:14:11 +02:00
|
|
|
int number; /* strategy num or support proc num */
|
2010-11-24 20:20:39 +01:00
|
|
|
List *order_family; /* only used for ordering operators */
|
2016-12-28 18:00:00 +01:00
|
|
|
List *class_args; /* amproclefttype/amprocrighttype or
|
|
|
|
* amoplefttype/amoprighttype */
|
2002-07-30 00:14:11 +02:00
|
|
|
/* fields used for a storagetype item: */
|
|
|
|
TypeName *storedtype; /* datatype stored in index */
|
|
|
|
} CreateOpClassItem;
|
|
|
|
|
2007-01-23 06:07:18 +01:00
|
|
|
/* ----------------------
|
|
|
|
* Create Operator Family Statement
|
|
|
|
* ----------------------
|
|
|
|
*/
|
|
|
|
typedef struct CreateOpFamilyStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
List *opfamilyname; /* qualified name (list of Value strings) */
|
|
|
|
char *amname; /* name of index AM opfamily is for */
|
2007-11-15 23:25:18 +01:00
|
|
|
} CreateOpFamilyStmt;
|
2007-01-23 06:07:18 +01:00
|
|
|
|
|
|
|
/* ----------------------
|
|
|
|
* Alter Operator Family Statement
|
|
|
|
* ----------------------
|
|
|
|
*/
|
|
|
|
typedef struct AlterOpFamilyStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
List *opfamilyname; /* qualified name (list of Value strings) */
|
|
|
|
char *amname; /* name of index AM opfamily is for */
|
|
|
|
bool isDrop; /* ADD or DROP the items? */
|
|
|
|
List *items; /* List of CreateOpClassItem nodes */
|
2007-11-15 23:25:18 +01:00
|
|
|
} AlterOpFamilyStmt;
|
2007-01-23 06:07:18 +01:00
|
|
|
|
2000-02-18 10:30:20 +01:00
|
|
|
/* ----------------------
|
2002-07-18 18:47:26 +02:00
|
|
|
* Drop Table|Sequence|View|Index|Type|Domain|Conversion|Schema Statement
|
2000-02-18 10:30:20 +01:00
|
|
|
* ----------------------
|
|
|
|
*/
|
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
typedef struct DropStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
Remove objname/objargs split for referring to objects
In simpler times, it might have worked to refer to all kinds of objects
by a list of name components and an optional argument list. But this
doesn't work for all objects, which has resulted in a collection of
hacks to place various other nodes types into these fields, which have
to be unpacked at the other end. This makes it also weird to represent
lists of such things in the grammar, because they would have to be lists
of singleton lists, to make the unpacking work consistently. The other
problem is that keeping separate name and args fields makes it awkward
to deal with lists of functions.
Change that by dropping the objargs field and have objname, renamed to
object, be a generic Node, which can then be flexibly assigned and
managed using the normal Node mechanisms. In many cases it will still
be a List of names, in some cases it will be a string Value, for types
it will be the existing Typename, for functions it will now use the
existing ObjectWithArgs node type. Some of the more obscure object
types still use somewhat arbitrary nested lists.
Reviewed-by: Jim Nasby <Jim.Nasby@BlueTreble.com>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2016-11-12 18:00:00 +01:00
|
|
|
List *objects; /* list of names */
|
2003-06-27 16:45:32 +02:00
|
|
|
ObjectType removeType; /* object type */
|
2002-07-01 17:27:56 +02:00
|
|
|
DropBehavior behavior; /* RESTRICT or CASCADE behavior */
|
2005-11-22 19:17:34 +01:00
|
|
|
bool missing_ok; /* skip error if object is missing? */
|
2012-04-06 11:21:40 +02:00
|
|
|
bool concurrent; /* drop index concurrently? */
|
2002-03-08 05:37:18 +01:00
|
|
|
} DropStmt;
|
1996-08-28 03:59:28 +02:00
|
|
|
|
|
|
|
/* ----------------------
|
2002-03-08 05:37:18 +01:00
|
|
|
* Truncate Table Statement
|
1996-08-28 03:59:28 +02:00
|
|
|
* ----------------------
|
|
|
|
*/
|
2002-03-08 05:37:18 +01:00
|
|
|
typedef struct TruncateStmt
|
1997-09-07 07:04:48 +02:00
|
|
|
{
|
1997-09-08 04:41:22 +02:00
|
|
|
NodeTag type;
|
2005-01-27 04:19:37 +01:00
|
|
|
List *relations; /* relations (RangeVars) to be truncated */
|
2008-05-17 01:36:05 +02:00
|
|
|
bool restart_seqs; /* restart owned sequences? */
|
2006-03-03 04:30:54 +01:00
|
|
|
DropBehavior behavior; /* RESTRICT or CASCADE behavior */
|
2002-03-08 05:37:18 +01:00
|
|
|
} TruncateStmt;
|
1996-08-28 03:59:28 +02:00
|
|
|
|
|
|
|
/* ----------------------
|
2002-03-08 05:37:18 +01:00
|
|
|
* Comment On Statement
|
1996-08-28 03:59:28 +02:00
|
|
|
* ----------------------
|
|
|
|
*/
|
2002-03-08 05:37:18 +01:00
|
|
|
typedef struct CommentStmt
|
1997-09-07 07:04:48 +02:00
|
|
|
{
|
1997-09-08 04:41:22 +02:00
|
|
|
NodeTag type;
|
2003-06-27 16:45:32 +02:00
|
|
|
ObjectType objtype; /* Object's type */
|
2016-11-12 18:00:00 +01:00
|
|
|
Node *object; /* Qualified name of the object */
|
2002-04-09 22:35:55 +02:00
|
|
|
char *comment; /* Comment to insert, or NULL to remove */
|
2002-03-08 05:37:18 +01:00
|
|
|
} CommentStmt;
|
1996-08-28 03:59:28 +02:00
|
|
|
|
2010-09-28 02:55:27 +02:00
|
|
|
/* ----------------------
|
|
|
|
* SECURITY LABEL Statement
|
|
|
|
* ----------------------
|
|
|
|
*/
|
|
|
|
typedef struct SecLabelStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
ObjectType objtype; /* Object's type */
|
2016-11-12 18:00:00 +01:00
|
|
|
Node *object; /* Qualified name of the object */
|
2010-09-28 02:55:27 +02:00
|
|
|
char *provider; /* Label provider (or NULL) */
|
|
|
|
char *label; /* New security label to be assigned */
|
|
|
|
} SecLabelStmt;
|
|
|
|
|
1996-08-28 03:59:28 +02:00
|
|
|
/* ----------------------
|
2003-03-10 04:53:52 +01:00
|
|
|
* Declare Cursor Statement
|
2007-04-28 00:05:49 +02:00
|
|
|
*
|
Change representation of statement lists, and add statement location info.
This patch makes several changes that improve the consistency of
representation of lists of statements. It's always been the case
that the output of parse analysis is a list of Query nodes, whatever
the types of the individual statements in the list. This patch brings
similar consistency to the outputs of raw parsing and planning steps:
* The output of raw parsing is now always a list of RawStmt nodes;
the statement-type-dependent nodes are one level down from that.
* The output of pg_plan_queries() is now always a list of PlannedStmt
nodes, even for utility statements. In the case of a utility statement,
"planning" just consists of wrapping a CMD_UTILITY PlannedStmt around
the utility node. This list representation is now used in Portal and
CachedPlan plan lists, replacing the former convention of intermixing
PlannedStmts with bare utility-statement nodes.
Now, every list of statements has a consistent head-node type depending
on how far along it is in processing. This allows changing many places
that formerly used generic "Node *" pointers to use a more specific
pointer type, thus reducing the number of IsA() tests and casts needed,
as well as improving code clarity.
Also, the post-parse-analysis representation of DECLARE CURSOR is changed
so that it looks more like EXPLAIN, PREPARE, etc. That is, the contained
SELECT remains a child of the DeclareCursorStmt rather than getting flipped
around to be the other way. It's now true for both Query and PlannedStmt
that utilityStmt is non-null if and only if commandType is CMD_UTILITY.
That allows simplifying a lot of places that were testing both fields.
(I think some of those were just defensive programming, but in many places,
it was actually necessary to avoid confusing DECLARE CURSOR with SELECT.)
Because PlannedStmt carries a canSetTag field, we're also able to get rid
of some ad-hoc rules about how to reconstruct canSetTag for a bare utility
statement; specifically, the assumption that a utility is canSetTag if and
only if it's the only one in its list. While I see no near-term need for
relaxing that restriction, it's nice to get rid of the ad-hocery.
The API of ProcessUtility() is changed so that what it's passed is the
wrapper PlannedStmt not just the bare utility statement. This will affect
all users of ProcessUtility_hook, but the changes are pretty trivial; see
the affected contrib modules for examples of the minimum change needed.
(Most compilers should give pointer-type-mismatch warnings for uncorrected
code.)
There's also a change in the API of ExplainOneQuery_hook, to pass through
cursorOptions instead of expecting hook functions to know what to pick.
This is needed because of the DECLARE CURSOR changes, but really should
have been done in 9.6; it's unlikely that any extant hook functions
know about using CURSOR_OPT_PARALLEL_OK.
Finally, teach gram.y to save statement boundary locations in RawStmt
nodes, and pass those through to Query and PlannedStmt nodes. This allows
more intelligent handling of cases where a source query string contains
multiple statements. This patch doesn't actually do anything with the
information, but a follow-on patch will. (Passing this information through
cleanly is the true motivation for these changes; while I think this is all
good cleanup, it's unlikely we'd have bothered without this end goal.)
catversion bump because addition of location fields to struct Query
affects stored rules.
This patch is by me, but it owes a good deal to Fabien Coelho who did
a lot of preliminary work on the problem, and also reviewed the patch.
Discussion: https://postgr.es/m/alpine.DEB.2.20.1612200926310.29821@lancre
2017-01-14 22:02:35 +01:00
|
|
|
* The "query" field is initially a raw parse tree, and is converted to a
|
|
|
|
* Query node during parse analysis. Note that rewriting and planning
|
|
|
|
* of the query are always postponed until execution.
|
2003-03-10 04:53:52 +01:00
|
|
|
* ----------------------
|
|
|
|
*/
|
2007-11-15 22:14:46 +01:00
|
|
|
#define CURSOR_OPT_BINARY 0x0001 /* BINARY */
|
|
|
|
#define CURSOR_OPT_SCROLL 0x0002 /* SCROLL explicitly given */
|
|
|
|
#define CURSOR_OPT_NO_SCROLL 0x0004 /* NO SCROLL explicitly given */
|
|
|
|
#define CURSOR_OPT_INSENSITIVE 0x0008 /* INSENSITIVE */
|
|
|
|
#define CURSOR_OPT_HOLD 0x0010 /* WITH HOLD */
|
2011-09-16 06:42:53 +02:00
|
|
|
/* these planner-control flags do not correspond to any SQL grammar: */
|
2007-11-15 22:14:46 +01:00
|
|
|
#define CURSOR_OPT_FAST_PLAN 0x0020 /* prefer fast-start plan */
|
2012-06-10 21:20:04 +02:00
|
|
|
#define CURSOR_OPT_GENERIC_PLAN 0x0040 /* force use of generic plan */
|
2011-09-16 06:42:53 +02:00
|
|
|
#define CURSOR_OPT_CUSTOM_PLAN 0x0080 /* force use of custom plan */
|
2015-09-16 21:38:47 +02:00
|
|
|
#define CURSOR_OPT_PARALLEL_OK 0x0100 /* parallel mode OK */
|
2003-03-10 04:53:52 +01:00
|
|
|
|
|
|
|
typedef struct DeclareCursorStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
char *portalname; /* name of the portal (cursor) */
|
|
|
|
int options; /* bitmask of options (see above) */
|
2017-01-14 22:02:35 +01:00
|
|
|
Node *query; /* the query (see comments above) */
|
2003-08-08 23:42:59 +02:00
|
|
|
} DeclareCursorStmt;
|
2003-03-10 04:53:52 +01:00
|
|
|
|
|
|
|
/* ----------------------
|
|
|
|
* Close Portal Statement
|
|
|
|
* ----------------------
|
|
|
|
*/
|
|
|
|
typedef struct ClosePortalStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
char *portalname; /* name of the portal (cursor) */
|
2007-11-15 22:14:46 +01:00
|
|
|
/* NULL means CLOSE ALL */
|
2003-03-10 04:53:52 +01:00
|
|
|
} ClosePortalStmt;
|
|
|
|
|
|
|
|
/* ----------------------
|
|
|
|
* Fetch Statement (also Move)
|
1996-08-28 03:59:28 +02:00
|
|
|
* ----------------------
|
|
|
|
*/
|
2003-02-10 05:44:47 +01:00
|
|
|
typedef enum FetchDirection
|
|
|
|
{
|
2003-03-11 20:40:24 +01:00
|
|
|
/* for these, howMany is how many rows to fetch; FETCH_ALL means ALL */
|
2003-02-10 05:44:47 +01:00
|
|
|
FETCH_FORWARD,
|
2003-03-11 20:40:24 +01:00
|
|
|
FETCH_BACKWARD,
|
|
|
|
/* for these, howMany indicates a position; only one row is fetched */
|
|
|
|
FETCH_ABSOLUTE,
|
|
|
|
FETCH_RELATIVE
|
2003-08-08 23:42:59 +02:00
|
|
|
} FetchDirection;
|
2003-02-10 05:44:47 +01:00
|
|
|
|
2006-09-03 03:15:40 +02:00
|
|
|
#define FETCH_ALL LONG_MAX
|
2003-03-11 20:40:24 +01:00
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
typedef struct FetchStmt
|
1997-09-07 07:04:48 +02:00
|
|
|
{
|
1997-09-08 04:41:22 +02:00
|
|
|
NodeTag type;
|
2003-02-10 05:44:47 +01:00
|
|
|
FetchDirection direction; /* see above */
|
2006-09-03 05:19:45 +02:00
|
|
|
long howMany; /* number of rows, or position argument */
|
2002-03-08 05:37:18 +01:00
|
|
|
char *portalname; /* name of portal (cursor) */
|
2017-08-16 06:22:32 +02:00
|
|
|
bool ismove; /* true if MOVE */
|
2002-03-08 05:37:18 +01:00
|
|
|
} FetchStmt;
|
1996-08-28 03:59:28 +02:00
|
|
|
|
2000-10-05 21:11:39 +02:00
|
|
|
/* ----------------------
|
2002-03-08 05:37:18 +01:00
|
|
|
* Create Index Statement
|
2011-01-25 21:42:03 +01:00
|
|
|
*
|
|
|
|
* This represents creation of an index and/or an associated constraint.
|
Avoid pre-determining index names during CREATE TABLE LIKE parsing.
Formerly, when trying to copy both indexes and comments, CREATE TABLE LIKE
had to pre-assign names to indexes that had comments, because it made up an
explicit CommentStmt command to apply the comment and so it had to know the
name for the index. This creates bad interactions with other indexes, as
shown in bug #6734 from Daniele Varrazzo: the preassignment logic couldn't
take any other indexes into account so it could choose a conflicting name.
To fix, add a field to IndexStmt that allows it to carry a comment to be
assigned to the new index. (This isn't a user-exposed feature of CREATE
INDEX, only an internal option.) Now we don't need preassignment of index
names in any situation.
I also took the opportunity to refactor DefineIndex to accept the IndexStmt
as such, rather than passing all its fields individually in a mile-long
parameter list.
Back-patch to 9.2, but no further, because it seems too dangerous to change
IndexStmt or DefineIndex's API in released branches. The bug exists back
to 9.0 where CREATE TABLE LIKE grew the ability to copy comments, but given
the lack of prior complaints we'll just let it go unfixed before 9.2.
2012-07-16 19:25:18 +02:00
|
|
|
* If isconstraint is true, we should create a pg_constraint entry along
|
2014-05-06 18:12:18 +02:00
|
|
|
* with the index. But if indexOid isn't InvalidOid, we are not creating an
|
2012-07-16 19:25:18 +02:00
|
|
|
* index, just a UNIQUE/PKEY constraint using an existing index. isconstraint
|
|
|
|
* must always be true in this case, and the fields describing the index
|
|
|
|
* properties are empty.
|
2000-10-05 21:11:39 +02:00
|
|
|
* ----------------------
|
|
|
|
*/
|
2002-03-08 05:37:18 +01:00
|
|
|
typedef struct IndexStmt
|
2000-10-05 21:11:39 +02:00
|
|
|
{
|
|
|
|
NodeTag type;
|
2004-05-05 06:48:48 +02:00
|
|
|
char *idxname; /* name of new index, or NULL for default */
|
2002-03-21 17:02:16 +01:00
|
|
|
RangeVar *relation; /* relation to build index on */
|
2002-03-08 05:37:18 +01:00
|
|
|
char *accessMethod; /* name of access method (e.g., btree) */
|
2008-02-07 18:09:51 +01:00
|
|
|
char *tableSpace; /* tablespace, or NULL for default */
|
2012-07-16 19:25:18 +02:00
|
|
|
List *indexParams; /* columns to index: a list of IndexElem */
|
2018-04-07 22:00:39 +02:00
|
|
|
List *indexIncludingParams; /* additional columns to index: a list
|
|
|
|
* of IndexElem */
|
2012-07-16 19:25:18 +02:00
|
|
|
List *options; /* WITH clause options: a list of DefElem */
|
2002-03-08 05:37:18 +01:00
|
|
|
Node *whereClause; /* qualification (partial-index predicate) */
|
2010-02-26 03:01:40 +01:00
|
|
|
List *excludeOpNames; /* exclusion operator names, or NIL if none */
|
2012-07-16 19:25:18 +02:00
|
|
|
char *idxcomment; /* comment to apply to index, or NULL */
|
2011-01-25 21:42:03 +01:00
|
|
|
Oid indexOid; /* OID of an existing index, if any */
|
2012-07-16 19:25:18 +02:00
|
|
|
Oid oldNode; /* relfilenode of existing storage, if any */
|
2002-03-08 05:37:18 +01:00
|
|
|
bool unique; /* is index unique? */
|
2012-07-16 19:25:18 +02:00
|
|
|
bool primary; /* is index a primary key? */
|
|
|
|
bool isconstraint; /* is it for a pkey/unique constraint? */
|
2009-07-29 22:56:21 +02:00
|
|
|
bool deferrable; /* is the constraint DEFERRABLE? */
|
|
|
|
bool initdeferred; /* is the constraint INITIALLY DEFERRED? */
|
Get rid of multiple applications of transformExpr() to the same tree.
transformExpr() has for many years had provisions to do nothing when
applied to an already-transformed expression tree. However, this was
always ugly and of dubious reliability, so we'd be much better off without
it. The primary historical reason for it was that gram.y sometimes
returned multiple links to the same subexpression, which is no longer true
as of my BETWEEN fixes. We'd also grown some lazy hacks in CREATE TABLE
LIKE (failing to distinguish between raw and already-transformed index
specifications) and one or two other places.
This patch removes the need for and support for re-transforming already
transformed expressions. The index case is dealt with by adding a flag
to struct IndexStmt to indicate that it's already been transformed;
which has some benefit anyway in that tablecmds.c can now Assert that
transformation has happened rather than just assuming. The other main
reason was some rather sloppy code for array type coercion, which can
be fixed (and its performance improved too) by refactoring.
I did leave transformJoinUsingClause() still constructing expressions
containing untransformed operator nodes being applied to Vars, so that
transformExpr() still has to allow Var inputs. But that's a much narrower,
and safer, special case than before, since Vars will never appear in a raw
parse tree, and they don't have any substructure to worry about.
In passing fix some oversights in the patch that added CREATE INDEX
IF NOT EXISTS (missing processing of IndexStmt.if_not_exists). These
appear relatively harmless, but still sloppy coding practice.
2015-02-22 19:59:09 +01:00
|
|
|
bool transformed; /* true when transformIndexStmt is finished */
|
2006-08-25 06:06:58 +02:00
|
|
|
bool concurrent; /* should this be a concurrent index build? */
|
2014-11-06 10:48:33 +01:00
|
|
|
bool if_not_exists; /* just do nothing if index already exists? */
|
Fix tablespace inheritance for partitioned rels
Commit ca4103025dfe left a few loose ends. The most important one
(broken pg_dump output) is already fixed by virtue of commit
3b23552ad8bb, but some things remained:
* When ALTER TABLE rewrites tables, the indexes must remain in the
tablespace they were originally in. This didn't work because
index recreation during ALTER TABLE runs manufactured SQL (yuck),
which runs afoul of default_tablespace in competition with the parent
relation tablespace. To fix, reset default_tablespace to the empty
string temporarily, and add the TABLESPACE clause as appropriate.
* Setting a partitioned rel's tablespace to the database default is
confusing; if it worked, it would direct the partitions to that
tablespace regardless of default_tablespace. But in reality it does
not work, and making it work is a larger project. Therefore, throw
an error when this condition is detected, to alert the unwary.
Add some docs and tests, too.
Author: Álvaro Herrera
Discussion: https://postgr.es/m/CAKJS1f_1c260nOt_vBJ067AZ3JXptXVRohDVMLEBmudX1YEx-A@mail.gmail.com
2019-04-25 16:20:23 +02:00
|
|
|
bool reset_default_tblspc; /* reset default_tablespace prior to
|
|
|
|
* executing */
|
2002-03-08 05:37:18 +01:00
|
|
|
} IndexStmt;
|
1996-08-28 03:59:28 +02:00
|
|
|
|
Implement multivariate n-distinct coefficients
Add support for explicitly declared statistic objects (CREATE
STATISTICS), allowing collection of statistics on more complex
combinations than individual table columns. Companion commands DROP
STATISTICS and ALTER STATISTICS ... OWNER TO / SET SCHEMA / RENAME are
added too. All this DDL has been designed so that more statistic types
can be added later on, such as multivariate most-common-values and
multivariate histograms between columns of a single table, leaving room
for permitting columns on multiple tables, too, as well as expressions.
This commit only adds support for collection of n-distinct coefficients
on user-specified sets of columns in a single table. This is useful to
estimate the number of distinct groups in GROUP BY and DISTINCT clauses;
estimation errors there can cause over-allocation of memory in hashed
aggregates, for instance, so it's a worthwhile problem to solve. A new
special pseudo-type pg_ndistinct is used.
(num-distinct estimation was deemed sufficiently useful by itself that
this is worthwhile even if no further statistic types are added
immediately; so much so that another version of essentially the same
functionality was submitted by Kyotaro Horiguchi:
https://postgr.es/m/20150828.173334.114731693.horiguchi.kyotaro@lab.ntt.co.jp
though this commit does not use that code.)
Author: Tomas Vondra. Some code rework by Álvaro.
Reviewed-by: Dean Rasheed, David Rowley, Kyotaro Horiguchi, Jeff Janes,
Ideriha Takeshi
Discussion: https://postgr.es/m/543AFA15.4080608@fuzzy.cz
https://postgr.es/m/20170320190220.ixlaueanxegqd5gr@alvherre.pgsql
2017-03-24 18:06:10 +01:00
|
|
|
/* ----------------------
|
|
|
|
* Create Statistics Statement
|
|
|
|
* ----------------------
|
|
|
|
*/
|
|
|
|
typedef struct CreateStatsStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
List *defnames; /* qualified name (list of Value strings) */
|
Change CREATE STATISTICS syntax
Previously, we had the WITH clause in the middle of the command, where
you'd specify both generic options as well as statistic types. Few
people liked this, so this commit changes it to remove the WITH keyword
from that clause and makes it accept statistic types only. (We
currently don't have any generic options, but if we invent any in the
future, we will gain a new WITH clause, probably at the end of the
command).
Also, the column list is now specified without parens, which makes the
whole command look more similar to a SELECT command. This change will
let us expand the command to support expressions (not just column
names) as well as multiple tables and their join conditions.
Tom added lots of code comments and fixed some parts of the CREATE
STATISTICS reference page, too; more changes in this area are
forthcoming. He also fixed a potential problem in the alter_generic
regression test, reducing verbosity on a cascaded drop to avoid
dependency on message ordering, as we do in other tests.
Tom also closed a security bug: we documented that table ownership was
required in order to create a statistics object on it, but didn't
actually implement it.
Implement tab-completion for statistics objects. This can stand some
more improvement.
Authors: Alvaro Herrera, with lots of cleanup by Tom Lane
Discussion: https://postgr.es/m/20170420212426.ltvgyhnefvhixm6i@alvherre.pgsql
2017-05-12 19:59:23 +02:00
|
|
|
List *stat_types; /* stat types (list of Value strings) */
|
|
|
|
List *exprs; /* expressions to build statistics on */
|
|
|
|
List *relations; /* rels to build stats on (list of RangeVar) */
|
Clone extended stats in CREATE TABLE (LIKE INCLUDING ALL)
The LIKE INCLUDING ALL clause to CREATE TABLE intuitively indicates
cloning of extended statistics on the source table, but it failed to do
so. Patch it up so that it does. Also include an INCLUDING STATISTICS
option to the LIKE clause, so that the behavior can be requested
individually, or excluded individually.
While at it, reorder the INCLUDING options, both in code and in docs, in
alphabetical order which makes more sense than feature-implementation
order that was previously used.
Backpatch this to Postgres 10, where extended statistics were
introduced, because this is seen as an oversight in a fresh feature
which is better to get consistent from the get-go instead of changing
only in pg11.
In pg11, comments on statistics objects are cloned too. In pg10 they
are not, because I (Álvaro) was too cowardly to change the parse node as
required to support it. Also, in pg10 I chose not to renumber the
parser symbols for the various INCLUDING options in LIKE, for the same
reason. Any corresponding user-visible changes (docs) are backpatched,
though.
Reported-by: Stephen Froehlich
Author: David Rowley
Reviewed-by: Álvaro Herrera, Tomas Vondra
Discussion: https://postgr.es/m/CY1PR0601MB1927315B45667A1B679D0FD5E5EF0@CY1PR0601MB1927.namprd06.prod.outlook.com
2018-03-05 23:37:19 +01:00
|
|
|
char *stxcomment; /* comment to apply to stats, or NULL */
|
2017-05-12 19:59:23 +02:00
|
|
|
bool if_not_exists; /* do nothing if stats name already exists */
|
2017-03-24 18:06:10 +01:00
|
|
|
} CreateStatsStmt;
|
|
|
|
|
Allow setting statistics target for extended statistics
When building statistics, we need to decide how many rows to sample and
how accurate the resulting statistics should be. Until now, it was not
possible to explicitly define statistics target for extended statistics
objects, the value was always computed from the per-attribute targets
with a fallback to the system-wide default statistics target.
That's a bit inconvenient, as it ties together the statistics target set
for per-column and extended statistics. In some cases it may be useful
to require a larger sample / higher accuracy for extended statistics (or the
other way around), but with this approach that's not possible.
So this commit introduces a new command that allows specifying the statistics
target for individual extended statistics objects, overriding the value
derived from per-attribute targets (and the system default).
ALTER STATISTICS stat_name SET STATISTICS target_value;
When determining statistics target for an extended statistics object we
first look at this explicitly set value. When this value is -1, we fall
back to the old formula, looking at the per-attribute targets first and
then the system default. This means the behavior is backwards compatible
with older PostgreSQL releases.
Author: Tomas Vondra
Discussion: https://postgr.es/m/20190618213357.vli3i23vpkset2xd@development
Reviewed-by: Kirk Jamison, Dean Rasheed
2019-09-10 20:09:27 +02:00
|
|
|
/* ----------------------
|
|
|
|
* Alter Statistics Statement
|
|
|
|
* ----------------------
|
|
|
|
*/
|
|
|
|
typedef struct AlterStatsStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
List *defnames; /* qualified name (list of Value strings) */
|
|
|
|
int stxstattarget; /* statistics target */
|
|
|
|
bool missing_ok; /* skip error if statistics object is missing */
|
|
|
|
} AlterStatsStmt;
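The lookup order for the target stored in stxstattarget (explicit value wins; -1 means unset, so fall back to the per-attribute targets and finally the system default) can be sketched in plain C. The function name and signature here are illustrative only, not PostgreSQL's actual code:

```c
#include <assert.h>

/* Illustrative sketch of the fallback order for an extended statistics
 * object's target: an explicitly set stxstattarget wins; -1 means
 * "unset", so use the largest per-attribute target; if those are all
 * unset too, use the system-wide default_statistics_target. */
static int
effective_stats_target(int stxstattarget,
                       const int *attr_targets, int nattrs,
                       int default_target)
{
    int         target = stxstattarget;

    if (target < 0)
    {
        /* fall back to the maximum of the per-attribute targets */
        for (int i = 0; i < nattrs; i++)
            if (attr_targets[i] > target)
                target = attr_targets[i];
    }
    if (target < 0)
        target = default_target;    /* system-wide default */
    return target;
}
```

This mirrors the backwards-compatible behavior the commit message describes: older objects (with no explicit target) keep resolving exactly as before.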
|
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/* ----------------------
|
|
|
|
* Create Function Statement
|
|
|
|
* ----------------------
|
1996-08-28 03:59:28 +02:00
|
|
|
*/
|
2002-05-17 20:32:52 +02:00
|
|
|
typedef struct CreateFunctionStmt
|
1997-09-07 07:04:48 +02:00
|
|
|
{
|
1997-09-08 04:41:22 +02:00
|
|
|
NodeTag type;
|
2018-01-26 18:25:44 +01:00
|
|
|
bool is_procedure; /* it's really CREATE PROCEDURE */
|
2002-03-08 05:37:18 +01:00
|
|
|
bool replace; /* T => replace if already exists */
|
2002-04-09 22:35:55 +02:00
|
|
|
List *funcname; /* qualified name of function to create */
|
2004-01-07 00:55:19 +01:00
|
|
|
List *parameters; /* a list of FunctionParameter */
|
2002-03-29 20:06:29 +01:00
|
|
|
TypeName *returnType; /* the return type */
|
2002-05-17 20:32:52 +02:00
|
|
|
List *options; /* a list of DefElem */
|
|
|
|
} CreateFunctionStmt;
|
1996-08-28 03:59:28 +02:00
|
|
|
|
2005-03-29 19:58:51 +02:00
|
|
|
typedef enum FunctionParameterMode
|
|
|
|
{
|
|
|
|
/* the assigned enum values appear in pg_proc, don't change 'em! */
|
|
|
|
FUNC_PARAM_IN = 'i', /* input only */
|
|
|
|
FUNC_PARAM_OUT = 'o', /* output only */
|
2008-07-16 03:30:23 +02:00
|
|
|
FUNC_PARAM_INOUT = 'b', /* both */
|
2008-12-18 19:20:35 +01:00
|
|
|
FUNC_PARAM_VARIADIC = 'v', /* variadic (always input) */
|
2008-07-18 05:32:53 +02:00
|
|
|
FUNC_PARAM_TABLE = 't' /* table function output column */
|
2005-03-29 19:58:51 +02:00
|
|
|
} FunctionParameterMode;
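Because these mode letters are stored verbatim in pg_proc, code that reads them back must interpret the raw characters. A minimal hedged sketch (param_mode_is_input is an illustrative helper, not a PostgreSQL function) classifies which modes supply input arguments at call time:

```c
#include <assert.h>
#include <stdbool.h>

/* Classify a parameter-mode letter as stored in the catalog.
 * VARIADIC is always input; OUT and TABLE columns are output only. */
static bool
param_mode_is_input(char mode)
{
    switch (mode)
    {
        case 'i':               /* FUNC_PARAM_IN */
        case 'b':               /* FUNC_PARAM_INOUT */
        case 'v':               /* FUNC_PARAM_VARIADIC */
            return true;
        case 'o':               /* FUNC_PARAM_OUT */
        case 't':               /* FUNC_PARAM_TABLE */
            return false;
    }
    return false;               /* unknown letter: treat as not input */
}
```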
|
|
|
|
|
2004-01-07 00:55:19 +01:00
|
|
|
typedef struct FunctionParameter
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
char *name; /* parameter name, or NULL if not given */
|
|
|
|
TypeName *argType; /* TypeName for parameter type */
|
2008-12-18 19:20:35 +01:00
|
|
|
FunctionParameterMode mode; /* IN/OUT/etc */
|
|
|
|
Node *defexpr; /* raw default expr, or NULL if not given */
|
2004-01-07 00:55:19 +01:00
|
|
|
} FunctionParameter;
|
|
|
|
|
2005-03-14 01:19:37 +01:00
|
|
|
typedef struct AlterFunctionStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
2017-11-30 14:46:13 +01:00
|
|
|
ObjectType objtype;
|
2016-12-28 18:00:00 +01:00
|
|
|
ObjectWithArgs *func; /* name and args of function */
|
2005-03-14 01:19:37 +01:00
|
|
|
List *actions; /* list of DefElem */
|
|
|
|
} AlterFunctionStmt;
|
|
|
|
|
2009-09-23 01:43:43 +02:00
|
|
|
/* ----------------------
|
|
|
|
* DO Statement
|
|
|
|
*
|
|
|
|
* DoStmt is the raw parser output, InlineCodeBlock is the execution-time API
|
|
|
|
* ----------------------
|
|
|
|
*/
|
|
|
|
typedef struct DoStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
List *args; /* List of DefElem nodes */
|
|
|
|
} DoStmt;
|
|
|
|
|
|
|
|
typedef struct InlineCodeBlock
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
char *source_text; /* source text of anonymous code block */
|
|
|
|
Oid langOid; /* OID of selected language */
|
2010-02-26 03:01:40 +01:00
|
|
|
bool langIsTrusted; /* trusted property of the language */
|
Transaction control in PL procedures
In each of the supplied procedural languages (PL/pgSQL, PL/Perl,
PL/Python, PL/Tcl), add language-specific commit and rollback
functions/commands to control transactions in procedures in that
language. Add similar underlying functions to SPI. Some additional
cleanup so that transaction commit or abort doesn't blow away data
structures still used by the procedure call. Add execution context
tracking to CALL and DO statements so that transaction control commands
can only be issued in top-level procedure and block calls, not function
calls or other procedure or block calls.
- SPI
Add a new function SPI_connect_ext() that is like SPI_connect() but
allows passing option flags. The only option flag right now is
SPI_OPT_NONATOMIC. A nonatomic SPI connection can execute transaction
control commands, otherwise it's not allowed. This is meant to be
passed down from CALL and DO statements which themselves know in which
context they are called. A nonatomic SPI connection uses different
memory management. A normal SPI connection allocates its memory in
TopTransactionContext. For nonatomic connections we use PortalContext
instead. As the comment in SPI_connect_ext() (previously SPI_connect())
indicates, one could potentially use PortalContext in all cases, but it
seems safest to leave the existing uses alone, because this stuff is
complicated enough already.
SPI also gets new functions SPI_start_transaction(), SPI_commit(), and
SPI_rollback(), which can be used by PLs to implement their transaction
control logic.
- portalmem.c
Some adjustments were made in the code that cleans up portals at
transaction abort. The portal code could already handle a command
*committing* a transaction and continuing (e.g., VACUUM), but it was not
quite prepared for a command *aborting* a transaction and continuing.
In AtAbort_Portals(), remove the code that marks an active portal as
failed. As the comment there already predicted, this doesn't work if
the running command wants to keep running after transaction abort. And
it's actually not necessary, because pquery.c is careful to run all
portal code in a PG_TRY block and explicitly runs MarkPortalFailed() if
there is an exception. So the code in AtAbort_Portals() is never used
anyway.
In AtAbort_Portals() and AtCleanup_Portals(), we need to be careful not
to clean up active portals too much. This mirrors similar code in
PreCommit_Portals().
- PL/Perl
Gets new functions spi_commit() and spi_rollback()
- PL/pgSQL
Gets new commands COMMIT and ROLLBACK.
Update the PL/SQL porting example in the documentation to reflect that
transactions are now possible in procedures.
- PL/Python
Gets new functions plpy.commit and plpy.rollback.
- PL/Tcl
Gets new commands commit and rollback.
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
2018-01-22 14:30:16 +01:00
|
|
|
bool atomic; /* atomic execution context */
|
2009-09-23 01:43:43 +02:00
|
|
|
} InlineCodeBlock;
|
|
|
|
|
2017-11-30 14:46:13 +01:00
|
|
|
/* ----------------------
|
|
|
|
* CALL statement
|
|
|
|
* ----------------------
|
|
|
|
*/
|
|
|
|
typedef struct CallStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
2018-02-21 00:03:31 +01:00
|
|
|
FuncCall *funccall; /* from the parser */
|
|
|
|
FuncExpr *funcexpr; /* transformed */
|
2017-11-30 14:46:13 +01:00
|
|
|
} CallStmt;
|
|
|
|
|
2018-01-22 14:30:16 +01:00
|
|
|
typedef struct CallContext
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
bool atomic;
|
|
|
|
} CallContext;
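The atomic flag above is what lets a PL decide whether COMMIT/ROLLBACK are legal: transaction control is only permitted in a nonatomic context, i.e. a top-level CALL or DO. A hedged sketch of that gate (the struct and function here are illustrative; in PostgreSQL the real check is driven by SPI_OPT_NONATOMIC in SPI):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for the CallContext node shown above. */
typedef struct CallContextSketch
{
    bool        atomic;         /* mirrors CallContext.atomic */
} CallContextSketch;

/* Transaction control statements (COMMIT/ROLLBACK inside a procedure)
 * are rejected whenever the execution context is atomic, e.g. when the
 * procedure was invoked from a function or a nested call. */
static bool
transaction_control_allowed(const CallContextSketch *ctx)
{
    return !ctx->atomic;
}
```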
|
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/* ----------------------
|
2002-04-24 04:48:55 +02:00
|
|
|
* Alter Object Rename Statement
|
2002-03-08 05:37:18 +01:00
|
|
|
* ----------------------
|
1996-08-28 03:59:28 +02:00
|
|
|
*/
|
2002-03-08 05:37:18 +01:00
|
|
|
typedef struct RenameStmt
|
1997-09-07 07:04:48 +02:00
|
|
|
{
|
1997-09-08 04:41:22 +02:00
|
|
|
NodeTag type;
|
2005-08-01 06:03:59 +02:00
|
|
|
ObjectType renameType; /* OBJECT_TABLE, OBJECT_COLUMN, etc */
|
2011-01-02 05:48:11 +01:00
|
|
|
ObjectType relationType; /* if column name, associated relation type */
|
2003-06-27 16:45:32 +02:00
|
|
|
RangeVar *relation; /* in case it's a table */
|
Remove objname/objargs split for referring to objects
In simpler times, it might have worked to refer to all kinds of objects
by a list of name components and an optional argument list. But this
doesn't work for all objects, which has resulted in a collection of
hacks to place various other node types into these fields, which have
to be unpacked at the other end. This makes it also weird to represent
lists of such things in the grammar, because they would have to be lists
of singleton lists, to make the unpacking work consistently. The other
problem is that keeping separate name and args fields makes it awkward
to deal with lists of functions.
Change that by dropping the objargs field and have objname, renamed to
object, be a generic Node, which can then be flexibly assigned and
managed using the normal Node mechanisms. In many cases it will still
be a List of names, in some cases it will be a string Value, for types
it will be the existing Typename, for functions it will now use the
existing ObjectWithArgs node type. Some of the more obscure object
types still use somewhat arbitrary nested lists.
Reviewed-by: Jim Nasby <Jim.Nasby@BlueTreble.com>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2016-11-12 18:00:00 +01:00
|
|
|
Node *object; /* in case it's some other object */
|
2003-08-04 02:43:34 +02:00
|
|
|
char *subname; /* name of contained object (column, rule,
|
|
|
|
* trigger, etc) */
|
2002-03-08 05:37:18 +01:00
|
|
|
char *newname; /* the new name */
|
2010-11-23 21:50:17 +01:00
|
|
|
DropBehavior behavior; /* RESTRICT or CASCADE behavior */
|
2012-06-10 21:20:04 +02:00
|
|
|
bool missing_ok; /* skip error if missing? */
|
2002-03-08 05:37:18 +01:00
|
|
|
} RenameStmt;
|
1996-08-28 03:59:28 +02:00
|
|
|
|
2016-04-05 23:38:54 +02:00
|
|
|
/* ----------------------
|
|
|
|
* ALTER object DEPENDS ON EXTENSION extname
|
|
|
|
* ----------------------
|
|
|
|
*/
|
|
|
|
typedef struct AlterObjectDependsStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
ObjectType objectType; /* OBJECT_FUNCTION, OBJECT_TRIGGER, etc */
|
|
|
|
RangeVar *relation; /* in case a table is involved */
|
2016-11-12 18:00:00 +01:00
|
|
|
Node *object; /* name of the object */
|
2016-04-05 23:38:54 +02:00
|
|
|
Value *extname; /* extension name */
|
|
|
|
} AlterObjectDependsStmt;
|
|
|
|
|
2005-08-01 06:03:59 +02:00
|
|
|
/* ----------------------
|
|
|
|
* ALTER object SET SCHEMA Statement
|
|
|
|
* ----------------------
|
|
|
|
*/
|
|
|
|
typedef struct AlterObjectSchemaStmt
|
|
|
|
{
|
2005-10-15 04:49:52 +02:00
|
|
|
NodeTag type;
|
Redesign tablesample method API, and do extensive code review.
The original implementation of TABLESAMPLE modeled the tablesample method
API on index access methods, which wasn't a good choice because, without
specialized DDL commands, there's no way to build an extension that can
implement a TSM. (Raw inserts into system catalogs are not an acceptable
thing to do, because we can't undo them during DROP EXTENSION, nor will
pg_upgrade behave sanely.) Instead adopt an API more like procedural
language handlers or foreign data wrappers, wherein the only SQL-level
support object needed is a single handler function identified by having
a special return type. This lets us get rid of the supporting catalog
altogether, so that no custom DDL support is needed for the feature.
Adjust the API so that it can support non-constant tablesample arguments
(the original coding assumed we could evaluate the argument expressions at
ExecInitSampleScan time, which is undesirable even if it weren't outright
unsafe), and discourage sampling methods from looking at invisible tuples.
Make sure that the BERNOULLI and SYSTEM methods are genuinely repeatable
within and across queries, as required by the SQL standard, and deal more
honestly with methods that can't support that requirement.
Make a full code-review pass over the tablesample additions, and fix
assorted bugs, omissions, infelicities, and cosmetic issues (such as
failure to put the added code stanzas in a consistent ordering).
Improve EXPLAIN's output of tablesample plans, too.
Back-patch to 9.5 so that we don't have to support the original API
in production.
2015-07-25 20:39:00 +02:00
|
|
|
ObjectType objectType; /* OBJECT_TABLE, OBJECT_TYPE, etc */
|
2005-08-01 06:03:59 +02:00
|
|
|
RangeVar *relation; /* in case it's a table */
|
2016-11-12 18:00:00 +01:00
|
|
|
Node *object; /* in case it's some other object */
|
2005-10-15 04:49:52 +02:00
|
|
|
char *newschema; /* the new schema */
|
2012-06-10 21:20:04 +02:00
|
|
|
bool missing_ok; /* skip error if missing? */
|
2005-08-01 06:03:59 +02:00
|
|
|
} AlterObjectSchemaStmt;
|
|
|
|
|
2004-06-25 23:55:59 +02:00
|
|
|
/* ----------------------
|
2004-08-29 07:07:03 +02:00
|
|
|
* Alter Object Owner Statement
|
2004-06-25 23:55:59 +02:00
|
|
|
* ----------------------
|
|
|
|
*/
|
|
|
|
typedef struct AlterOwnerStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
Redesign tablesample method API, and do extensive code review.
2015-07-25 20:39:00 +02:00
|
|
|
ObjectType objectType; /* OBJECT_TABLE, OBJECT_TYPE, etc */
|
2004-06-25 23:55:59 +02:00
|
|
|
RangeVar *relation; /* in case it's a table */
|
Remove objname/objargs split for referring to objects
2016-11-12 18:00:00 +01:00
|
|
|
Node *object; /* in case it's some other object */
|
2016-12-28 18:00:00 +01:00
|
|
|
RoleSpec *newowner; /* the new owner */
|
2004-06-25 23:55:59 +02:00
|
|
|
} AlterOwnerStmt;
|
|
|
|
|
|
|
|
|
2015-07-14 17:17:55 +02:00
|
|
|
/* ----------------------
|
|
|
|
* Alter Operator Set Restrict, Join
|
|
|
|
* ----------------------
|
|
|
|
*/
|
|
|
|
typedef struct AlterOperatorStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
2016-12-28 18:00:00 +01:00
|
|
|
ObjectWithArgs *opername; /* operator name and argument types */
|
2015-07-14 17:17:55 +02:00
|
|
|
List *options; /* List of DefElem nodes */
|
|
|
|
} AlterOperatorStmt;
|
|
|
|
|
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/* ----------------------
|
|
|
|
* Create Rule Statement
|
|
|
|
* ----------------------
|
2000-01-17 01:14:49 +01:00
|
|
|
*/
|
2002-03-08 05:37:18 +01:00
|
|
|
typedef struct RuleStmt
|
2000-01-17 01:14:49 +01:00
|
|
|
{
|
|
|
|
NodeTag type;
|
2002-03-21 17:02:16 +01:00
|
|
|
RangeVar *relation; /* relation the rule is for */
|
2002-03-08 05:37:18 +01:00
|
|
|
char *rulename; /* name of the rule */
|
|
|
|
Node *whereClause; /* qualifications */
|
2002-03-21 17:02:16 +01:00
|
|
|
CmdType event; /* SELECT, INSERT, etc */
|
2002-03-08 05:37:18 +01:00
|
|
|
bool instead; /* is a 'do instead'? */
|
|
|
|
List *actions; /* the action statements */
|
2002-09-02 04:13:02 +02:00
|
|
|
bool replace; /* OR REPLACE */
|
2002-03-08 05:37:18 +01:00
|
|
|
} RuleStmt;
|
2000-01-17 01:14:49 +01:00
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/* ----------------------
|
|
|
|
* Notify Statement
|
|
|
|
* ----------------------
|
1998-12-04 16:34:49 +01:00
|
|
|
*/
|
2002-03-08 05:37:18 +01:00
|
|
|
typedef struct NotifyStmt
|
1998-12-04 16:34:49 +01:00
|
|
|
{
|
|
|
|
NodeTag type;
|
2008-09-01 22:42:46 +02:00
|
|
|
char *conditionname; /* condition name to notify */
|
2010-02-16 23:34:57 +01:00
|
|
|
char *payload; /* the payload string, or NULL if none */
|
2002-03-08 05:37:18 +01:00
|
|
|
} NotifyStmt;
|
1998-12-04 16:34:49 +01:00
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/* ----------------------
|
|
|
|
* Listen Statement
|
|
|
|
* ----------------------
|
1998-12-04 16:34:49 +01:00
|
|
|
*/
|
2002-03-08 05:37:18 +01:00
|
|
|
typedef struct ListenStmt
|
1998-12-04 16:34:49 +01:00
|
|
|
{
|
|
|
|
NodeTag type;
|
2008-09-01 22:42:46 +02:00
|
|
|
char *conditionname; /* condition name to listen on */
|
2002-03-08 05:37:18 +01:00
|
|
|
} ListenStmt;
|
1998-12-04 16:34:49 +01:00
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/* ----------------------
|
|
|
|
* Unlisten Statement
|
|
|
|
* ----------------------
|
2001-06-20 00:39:12 +02:00
|
|
|
*/
|
2002-03-08 05:37:18 +01:00
|
|
|
typedef struct UnlistenStmt
|
2001-06-20 00:39:12 +02:00
|
|
|
{
|
2002-03-08 05:37:18 +01:00
|
|
|
NodeTag type;
|
2008-09-01 22:42:46 +02:00
|
|
|
char *conditionname; /* name to unlisten on, or NULL for all */
|
2002-03-08 05:37:18 +01:00
|
|
|
} UnlistenStmt;
|
2001-06-20 00:39:12 +02:00
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/* ----------------------
|
2002-08-04 06:31:44 +02:00
|
|
|
* {Begin|Commit|Rollback} Transaction Statement
|
2002-03-08 05:37:18 +01:00
|
|
|
* ----------------------
|
|
|
|
*/
|
2003-02-10 05:44:47 +01:00
|
|
|
typedef enum TransactionStmtKind
|
|
|
|
{
|
|
|
|
TRANS_STMT_BEGIN,
|
|
|
|
TRANS_STMT_START, /* semantically identical to BEGIN */
|
|
|
|
TRANS_STMT_COMMIT,
|
2004-07-27 07:11:48 +02:00
|
|
|
TRANS_STMT_ROLLBACK,
|
|
|
|
TRANS_STMT_SAVEPOINT,
|
|
|
|
TRANS_STMT_RELEASE,
|
2005-06-18 00:32:51 +02:00
|
|
|
TRANS_STMT_ROLLBACK_TO,
|
|
|
|
TRANS_STMT_PREPARE,
|
|
|
|
TRANS_STMT_COMMIT_PREPARED,
|
|
|
|
TRANS_STMT_ROLLBACK_PREPARED
|
2003-08-08 23:42:59 +02:00
|
|
|
} TransactionStmtKind;
|
2003-02-10 05:44:47 +01:00
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
typedef struct TransactionStmt
|
2001-06-20 00:39:12 +02:00
|
|
|
{
|
2001-10-25 07:50:21 +02:00
|
|
|
NodeTag type;
|
2003-02-10 05:44:47 +01:00
|
|
|
TransactionStmtKind kind; /* see above */
|
2018-02-17 02:57:06 +01:00
|
|
|
List *options; /* for BEGIN/START commands */
|
2018-04-26 20:47:16 +02:00
|
|
|
char *savepoint_name; /* for savepoint commands */
|
2005-10-15 04:49:52 +02:00
|
|
|
char *gid; /* for two-phase-commit related commands */
|
2019-03-24 10:33:14 +01:00
|
|
|
bool chain; /* AND CHAIN option */
|
2002-03-08 05:37:18 +01:00
|
|
|
} TransactionStmt;
|
2001-06-20 00:39:12 +02:00
|
|
|
|
2002-08-15 18:36:08 +02:00
|
|
|
/* ----------------------
|
|
|
|
* Create Type Statement, composite types
|
|
|
|
* ----------------------
|
|
|
|
*/
|
|
|
|
typedef struct CompositeTypeStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
RangeVar *typevar; /* the composite type to be created */
|
|
|
|
List *coldeflist; /* list of ColumnDef nodes */
|
|
|
|
} CompositeTypeStmt;
|
|
|
|
|
2007-04-02 05:49:42 +02:00
|
|
|
/* ----------------------
|
|
|
|
* Create Type Statement, enum types
|
|
|
|
* ----------------------
|
|
|
|
*/
|
|
|
|
typedef struct CreateEnumStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
2009-07-16 08:33:46 +02:00
|
|
|
List *typeName; /* qualified name (list of Value strings) */
|
2007-04-02 05:49:42 +02:00
|
|
|
List *vals; /* enum values (list of Value strings) */
|
2007-11-15 23:25:18 +01:00
|
|
|
} CreateEnumStmt;
|
2007-04-02 05:49:42 +02:00
|
|
|
|
2010-10-25 05:04:37 +02:00
|
|
|
/* ----------------------
|
2011-11-21 05:50:27 +01:00
|
|
|
* Create Type Statement, range types
|
2010-10-25 05:04:37 +02:00
|
|
|
* ----------------------
|
|
|
|
*/
|
2011-11-21 05:50:27 +01:00
|
|
|
typedef struct CreateRangeStmt
|
2010-10-25 05:04:37 +02:00
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
List *typeName; /* qualified name (list of Value strings) */
|
2011-11-21 05:50:27 +01:00
|
|
|
List *params; /* range parameters (list of DefElem) */
|
|
|
|
} CreateRangeStmt;
|
2002-08-15 18:36:08 +02:00
|
|
|
|
2011-11-03 12:16:28 +01:00
|
|
|
/* ----------------------
|
2011-11-21 05:50:27 +01:00
|
|
|
* Alter Type Statement, enum types
|
2011-11-03 12:16:28 +01:00
|
|
|
* ----------------------
|
|
|
|
*/
|
2011-11-21 05:50:27 +01:00
|
|
|
typedef struct AlterEnumStmt
|
2011-11-03 12:16:28 +01:00
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
List *typeName; /* qualified name (list of Value strings) */
|
2016-09-07 22:11:56 +02:00
|
|
|
char *oldVal; /* old enum value's name, if renaming */
|
2011-11-21 05:50:27 +01:00
|
|
|
char *newVal; /* new enum value's name */
|
|
|
|
char *newValNeighbor; /* neighboring enum value, if specified */
|
|
|
|
bool newValIsAfter; /* place new enum value after neighbor? */
|
Phase 2 of pgindent updates.
Change pg_bsd_indent to follow upstream rules for placement of comments
to the right of code, and remove pgindent hack that caused comments
following #endif to not obey the general rule.
Commit e3860ffa4dd0dad0dd9eea4be9cc1412373a8c89 wasn't actually using
the published version of pg_bsd_indent, but a hacked-up version that
tried to minimize the amount of movement of comments to the right of
code. The situation of interest is where such a comment has to be
moved to the right of its default placement at column 33 because there's
code there. BSD indent has always moved right in units of tab stops
in such cases --- but in the previous incarnation, indent was working
in 8-space tab stops, while now it knows we use 4-space tabs. So the
net result is that in about half the cases, such comments are placed
one tab stop left of before. This is better all around: it leaves
more room on the line for comment text, and it means that in such
cases the comment uniformly starts at the next 4-space tab stop after
the code, rather than sometimes one and sometimes two tabs after.
Also, ensure that comments following #endif are indented the same
as comments following other preprocessor commands such as #else.
That inconsistency turns out to have been self-inflicted damage
from a poorly-thought-through post-indent "fixup" in pgindent.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 21:18:54 +02:00
|
|
|
bool skipIfNewValExists; /* no error if new already exists? */
|
2011-11-21 05:50:27 +01:00
|
|
|
} AlterEnumStmt;
|
2011-11-03 12:16:28 +01:00
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/* ----------------------
|
|
|
|
* Create View Statement
|
|
|
|
* ----------------------
|
2001-06-20 00:39:12 +02:00
|
|
|
*/
|
2013-07-18 23:10:16 +02:00
|
|
|
typedef enum ViewCheckOption
|
|
|
|
{
|
|
|
|
NO_CHECK_OPTION,
|
|
|
|
LOCAL_CHECK_OPTION,
|
|
|
|
CASCADED_CHECK_OPTION
|
|
|
|
} ViewCheckOption;
|
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
typedef struct ViewStmt
|
2001-06-20 00:39:12 +02:00
|
|
|
{
|
2002-03-08 05:37:18 +01:00
|
|
|
NodeTag type;
|
2002-03-21 17:02:16 +01:00
|
|
|
RangeVar *view; /* the view to be created */
|
2002-03-08 05:37:18 +01:00
|
|
|
List *aliases; /* target column names */
|
Change representation of statement lists, and add statement location info.
2017-01-14 22:02:35 +01:00
|
|
|
Node *query; /* the SELECT query (as a raw parse tree) */
|
2002-09-02 04:13:02 +02:00
|
|
|
bool replace; /* replace an existing view? */
|
2011-12-22 22:15:57 +01:00
|
|
|
List *options; /* options from WITH clause */
|
2014-05-06 18:12:18 +02:00
|
|
|
ViewCheckOption withCheckOption; /* WITH CHECK OPTION */
|
2002-03-08 05:37:18 +01:00
|
|
|
} ViewStmt;
|
2001-06-20 00:39:12 +02:00
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/* ----------------------
|
|
|
|
* Load Statement
|
|
|
|
* ----------------------
|
|
|
|
*/
|
|
|
|
typedef struct LoadStmt
|
2001-06-20 00:39:12 +02:00
|
|
|
{
|
2001-10-25 07:50:21 +02:00
|
|
|
NodeTag type;
|
2002-03-08 05:37:18 +01:00
|
|
|
char *filename; /* file to load */
|
|
|
|
} LoadStmt;
|
2001-06-20 00:39:12 +02:00
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/* ----------------------
|
|
|
|
* Createdb Statement
|
|
|
|
* ----------------------
|
1996-08-28 03:59:28 +02:00
|
|
|
*/
|
2002-03-08 05:37:18 +01:00
|
|
|
typedef struct CreatedbStmt
|
1997-09-07 07:04:48 +02:00
|
|
|
{
|
1997-09-08 04:41:22 +02:00
|
|
|
NodeTag type;
|
2002-03-08 05:37:18 +01:00
|
|
|
char *dbname; /* name of database to create */
|
2002-06-18 19:27:58 +02:00
|
|
|
List *options; /* List of DefElem nodes */
|
2002-03-08 05:37:18 +01:00
|
|
|
} CreatedbStmt;
|
1996-08-28 03:59:28 +02:00
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/* ----------------------
|
|
|
|
* Alter Database
|
|
|
|
* ----------------------
|
1997-09-07 07:04:48 +02:00
|
|
|
*/
|
2005-07-31 19:19:22 +02:00
|
|
|
typedef struct AlterDatabaseStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
char *dbname; /* name of database to alter */
|
|
|
|
List *options; /* List of DefElem nodes */
|
|
|
|
} AlterDatabaseStmt;
|
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
typedef struct AlterDatabaseSetStmt
|
1997-09-07 07:04:48 +02:00
|
|
|
{
|
1997-09-08 04:41:22 +02:00
|
|
|
NodeTag type;
|
2007-09-03 20:46:30 +02:00
|
|
|
char *dbname; /* database name */
|
|
|
|
VariableSetStmt *setstmt; /* SET or RESET subcommand */
|
2002-03-08 05:37:18 +01:00
|
|
|
} AlterDatabaseSetStmt;
|
1996-08-28 03:59:28 +02:00
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/* ----------------------
|
|
|
|
* Dropdb Statement
|
|
|
|
* ----------------------
|
1996-08-28 03:59:28 +02:00
|
|
|
*/
|
2002-03-08 05:37:18 +01:00
|
|
|
typedef struct DropdbStmt
|
1997-09-07 07:04:48 +02:00
|
|
|
{
|
1997-09-08 04:41:22 +02:00
|
|
|
NodeTag type;
|
2002-03-08 05:37:18 +01:00
|
|
|
char *dbname; /* database to drop */
|
2005-11-22 19:17:34 +01:00
|
|
|
bool missing_ok; /* skip error if db is missing? */
|
2019-11-12 06:36:13 +01:00
|
|
|
List *options; /* currently only FORCE is supported */
|
2002-03-08 05:37:18 +01:00
|
|
|
} DropdbStmt;
|
1996-08-28 03:59:28 +02:00
|
|
|
|
2013-12-18 15:42:44 +01:00
|
|
|
/* ----------------------
|
|
|
|
* Alter System Statement
|
|
|
|
* ----------------------
|
|
|
|
*/
|
|
|
|
typedef struct AlterSystemStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
VariableSetStmt *setstmt; /* SET subcommand */
|
2014-05-06 18:12:18 +02:00
|
|
|
} AlterSystemStmt;
|
2013-12-18 15:42:44 +01:00
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/* ----------------------
|
|
|
|
* Cluster Statement (support pbrown's cluster index implementation)
|
|
|
|
* ----------------------
|
1996-08-28 03:59:28 +02:00
|
|
|
*/
|
2018-07-24 04:37:32 +02:00
|
|
|
typedef enum ClusterOption
|
|
|
|
{
|
2018-07-29 15:00:42 +02:00
|
|
|
CLUOPT_RECHECK = 1 << 0, /* recheck relation state */
|
|
|
|
CLUOPT_VERBOSE = 1 << 1 /* print progress info */
|
2018-07-24 04:37:32 +02:00
|
|
|
} ClusterOption;
|
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
typedef struct ClusterStmt
|
1997-09-07 07:04:48 +02:00
|
|
|
{
|
1997-09-08 04:41:22 +02:00
|
|
|
NodeTag type;
|
2002-11-15 04:09:39 +01:00
|
|
|
RangeVar *relation; /* relation being indexed, or NULL if all */
|
2002-03-08 05:37:18 +01:00
|
|
|
char *indexname; /* original index defined */
|
2018-07-24 04:37:32 +02:00
|
|
|
int options; /* OR of ClusterOption flags */
|
2002-03-08 05:37:18 +01:00
|
|
|
} ClusterStmt;
|
1996-08-28 03:59:28 +02:00
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/* ----------------------
|
|
|
|
* Vacuum and Analyze Statements
|
1999-07-18 05:45:01 +02:00
|
|
|
*
|
2002-03-08 05:37:18 +01:00
|
|
|
* Even though these are nominally two statements, it's convenient to use
|
2019-03-18 20:14:52 +01:00
|
|
|
* just one node type for both.
|
2002-03-08 05:37:18 +01:00
|
|
|
* ----------------------
|
1997-09-07 07:04:48 +02:00
|
|
|
*/
|
2019-03-18 20:14:52 +01:00
|
|
|
typedef struct VacuumStmt
|
2009-11-16 22:32:07 +01:00
|
|
|
{
|
2019-03-18 20:14:52 +01:00
|
|
|
NodeTag type;
|
2019-05-22 18:55:34 +02:00
|
|
|
List *options; /* list of DefElem nodes */
|
2019-03-18 20:14:52 +01:00
|
|
|
List *rels; /* list of VacuumRelation, or NIL for all */
|
|
|
|
bool is_vacuumcmd; /* true for VACUUM, false for ANALYZE */
|
|
|
|
} VacuumStmt;
|
2009-11-16 22:32:07 +01:00
|
|
|
|
2017-10-04 00:53:44 +02:00
|
|
|
/*
|
|
|
|
* Info about a single target table of VACUUM/ANALYZE.
|
|
|
|
*
|
|
|
|
* If the OID field is set, it always identifies the table to process.
|
|
|
|
* Then the relation field can be NULL; if it isn't, it's used only to report
|
|
|
|
* failure to open/lock the relation.
|
|
|
|
*/
|
|
|
|
typedef struct VacuumRelation
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
RangeVar *relation; /* table name to process, or NULL */
|
|
|
|
Oid oid; /* table's OID; InvalidOid if not looked up */
|
|
|
|
List *va_cols; /* list of column names, or NIL for all */
|
|
|
|
} VacuumRelation;
|
|
|
|
|
2002-03-08 05:37:18 +01:00
|
|
|
/* ----------------------
|
|
|
|
* Explain Statement
|
2010-01-15 23:36:35 +01:00
|
|
|
*
|
Change representation of statement lists, and add statement location info.
2017-01-14 22:02:35 +01:00
|
|
|
* The "query" field is initially a raw parse tree, and is converted to a
|
|
|
|
* Query node during parse analysis. Note that rewriting and planning
|
|
|
|
* of the query are always postponed until execution.
|
2002-03-08 05:37:18 +01:00
|
|
|
* ----------------------
|
1996-08-28 03:59:28 +02:00
|
|
|
*/
|
2002-03-08 05:37:18 +01:00
|
|
|
typedef struct ExplainStmt
|
1998-08-05 06:49:19 +02:00
|
|
|
{
|
|
|
|
NodeTag type;
|
2010-01-15 23:36:35 +01:00
|
|
|
Node *query; /* the query (see comments above) */
|
2009-07-27 01:34:18 +02:00
|
|
|
List *options; /* list of DefElem nodes */
|
2002-03-08 05:37:18 +01:00
|
|
|
} ExplainStmt;
|
1998-08-05 06:49:19 +02:00
|
|
|
|
Restructure SELECT INTO's parsetree representation into CreateTableAsStmt.
Making this operation look like a utility statement seems generally a good
idea, and particularly so in light of the desire to provide command
triggers for utility statements. The original choice of representing it as
SELECT with an IntoClause appendage had metastasized into rather a lot of
places, unfortunately, so that this patch is a great deal more complicated
than one might at first expect.
In particular, keeping EXPLAIN working for SELECT INTO and CREATE TABLE AS
subcommands required restructuring some EXPLAIN-related APIs. Add-on code
that calls ExplainOnePlan or ExplainOneUtility, or uses
ExplainOneQuery_hook, will need adjustment.
Also, the cases PREPARE ... SELECT INTO and CREATE RULE ... SELECT INTO,
which formerly were accepted though undocumented, are no longer accepted.
The PREPARE case can be replaced with use of CREATE TABLE AS EXECUTE.
The CREATE RULE case doesn't seem to have much real-world use (since the
rule would work only once before failing with "table already exists"),
so we'll not bother with that one.
Both SELECT INTO and CREATE TABLE AS still return a command tag of
"SELECT nnnn". There was some discussion of returning "CREATE TABLE nnnn",
but for the moment backwards compatibility wins the day.
Andres Freund and Tom Lane
2012-03-20 02:37:19 +01:00
|
|
|
/* ----------------------
|
|
|
|
* CREATE TABLE AS Statement (a/k/a SELECT INTO)
|
|
|
|
*
|
|
|
|
* A query written as CREATE TABLE AS will produce this node type natively.
|
|
|
|
* A query written as SELECT ... INTO will be transformed to this form during
|
|
|
|
* parse analysis.
|
2013-03-04 01:23:31 +01:00
|
|
|
* A query written as CREATE MATERIALIZED VIEW will produce this node type
|
|
|
|
* during parse analysis, since it needs all the same data.
|
Restructure SELECT INTO's parsetree representation into CreateTableAsStmt.
2012-03-20 02:37:19 +01:00
|
|
|
*
|
|
|
|
* The "query" field is handled similarly to EXPLAIN, though note that it
|
|
|
|
* can be a SELECT or an EXECUTE, but not other DML statements.
|
|
|
|
* ----------------------
|
|
|
|
*/
|
|
|
|
typedef struct CreateTableAsStmt
|
|
|
|
{
|
|
|
|
NodeTag type;
|
|
|
|
Node *query; /* the query (see comments above) */
|
|
|
|
IntoClause *into; /* destination table */
|
2013-04-27 23:48:57 +02:00
|
|
|
ObjectType relkind; /* OBJECT_TABLE or OBJECT_MATVIEW */
|
2012-06-10 21:20:04 +02:00
|
|
|
bool is_select_into; /* it was written as SELECT INTO */
|
2014-12-13 19:56:09 +01:00
|
|
|
bool if_not_exists; /* just do nothing if it already exists? */
|
Restructure SELECT INTO's parsetree representation into CreateTableAsStmt.
2012-03-20 02:37:19 +01:00
|
|
|
} CreateTableAsStmt;

/* ----------------------
 *		REFRESH MATERIALIZED VIEW Statement
 * ----------------------
 */
typedef struct RefreshMatViewStmt
{
	NodeTag		type;
	bool		concurrent;		/* allow concurrent access? */
	bool		skipData;		/* true for WITH NO DATA */
	RangeVar   *relation;		/* relation to insert into */
} RefreshMatViewStmt;
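
/*
 * Illustrative examples (not part of the upstream header): SQL that parses
 * into a RefreshMatViewStmt, using a hypothetical matview name "mv":
 *	 REFRESH MATERIALIZED VIEW mv;				-- concurrent/skipData false
 *	 REFRESH MATERIALIZED VIEW CONCURRENTLY mv; -- concurrent = true
 *	 REFRESH MATERIALIZED VIEW mv WITH NO DATA; -- skipData = true
 */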

/* ----------------------
 *		Checkpoint Statement
 * ----------------------
 */
typedef struct CheckPointStmt
{
	NodeTag		type;
} CheckPointStmt;

/* ----------------------
 *		Discard Statement
 * ----------------------
 */

typedef enum DiscardMode
{
	DISCARD_ALL,
	DISCARD_PLANS,
	DISCARD_SEQUENCES,
	DISCARD_TEMP
} DiscardMode;

typedef struct DiscardStmt
{
	NodeTag		type;
	DiscardMode target;
} DiscardStmt;

/* ----------------------
 *		LOCK Statement
 * ----------------------
 */
typedef struct LockStmt
{
	NodeTag		type;
	List	   *relations;		/* relations to lock */
	int			mode;			/* lock mode */
	bool		nowait;			/* no wait mode */
} LockStmt;
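
/*
 * Illustrative example (not part of the upstream header), with hypothetical
 * table names t1 and t2:
 *	 LOCK TABLE t1, t2 IN SHARE MODE NOWAIT;
 * parses to relations = a two-element list of RangeVars, mode = the
 * corresponding lock-mode constant (ShareLock), and nowait = true.
 */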

/* ----------------------
 *		SET CONSTRAINTS Statement
 * ----------------------
 */
typedef struct ConstraintsSetStmt
{
	NodeTag		type;
	List	   *constraints;	/* List of names as RangeVars */
	bool		deferred;
} ConstraintsSetStmt;

/* ----------------------
 *		REINDEX Statement
 * ----------------------
 */

/* Reindex options */
#define REINDEXOPT_VERBOSE (1 << 0) /* print progress info */
#define REINDEXOPT_REPORT_PROGRESS (1 << 1) /* report pgstat progress */

typedef enum ReindexObjectType
{
	REINDEX_OBJECT_INDEX,		/* index */
	REINDEX_OBJECT_TABLE,		/* table or materialized view */
	REINDEX_OBJECT_SCHEMA,		/* schema */
	REINDEX_OBJECT_SYSTEM,		/* system catalogs */
	REINDEX_OBJECT_DATABASE		/* database */
} ReindexObjectType;

typedef struct ReindexStmt
{
	NodeTag		type;
	ReindexObjectType kind;		/* REINDEX_OBJECT_INDEX, REINDEX_OBJECT_TABLE,
								 * etc. */
	RangeVar   *relation;		/* Table or index to reindex */
	const char *name;			/* name of database to reindex */
	int			options;		/* Reindex options flags */
	bool		concurrent;		/* reindex concurrently? */
} ReindexStmt;
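
/*
 * Illustrative examples (not part of the upstream header), with hypothetical
 * object names:
 *	 REINDEX (VERBOSE) TABLE t;		-- kind = REINDEX_OBJECT_TABLE,
 *									-- options include REINDEXOPT_VERBOSE
 *	 REINDEX INDEX CONCURRENTLY i;	-- kind = REINDEX_OBJECT_INDEX,
 *									-- concurrent = true
 *	 REINDEX DATABASE db;			-- kind = REINDEX_OBJECT_DATABASE,
 *									-- name = "db", relation = NULL
 */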

/* ----------------------
 *		CREATE CONVERSION Statement
 * ----------------------
 */
typedef struct CreateConversionStmt
{
	NodeTag		type;
	List	   *conversion_name;	/* Name of the conversion */
	char	   *for_encoding_name;	/* source encoding name */
	char	   *to_encoding_name;	/* destination encoding name */
	List	   *func_name;		/* qualified conversion function name */
	bool		def;			/* is this a default conversion? */
} CreateConversionStmt;

/* ----------------------
 *		CREATE CAST Statement
 * ----------------------
 */
typedef struct CreateCastStmt
{
	NodeTag		type;
	TypeName   *sourcetype;
	TypeName   *targettype;
	ObjectWithArgs *func;
	CoercionContext context;
	bool		inout;
} CreateCastStmt;

/* ----------------------
 *		CREATE TRANSFORM Statement
 * ----------------------
 */
typedef struct CreateTransformStmt
{
	NodeTag		type;
	bool		replace;
	TypeName   *type_name;
	char	   *lang;
	ObjectWithArgs *fromsql;
	ObjectWithArgs *tosql;
} CreateTransformStmt;

/* ----------------------
 *		PREPARE Statement
 * ----------------------
 */
typedef struct PrepareStmt
{
	NodeTag		type;
	char	   *name;			/* Name of plan, arbitrary */
	List	   *argtypes;		/* Types of parameters (List of TypeName) */
	Node	   *query;			/* The query itself (as a raw parsetree) */
} PrepareStmt;

/* ----------------------
 *		EXECUTE Statement
 * ----------------------
 */

typedef struct ExecuteStmt
{
	NodeTag		type;
	char	   *name;			/* The name of the plan to execute */
	List	   *params;			/* Values to assign to parameters */
} ExecuteStmt;

/* ----------------------
 *		DEALLOCATE Statement
 * ----------------------
 */
typedef struct DeallocateStmt
{
	NodeTag		type;
	char	   *name;			/* The name of the plan to remove */
								/* NULL means DEALLOCATE ALL */
} DeallocateStmt;
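
/*
 * Illustrative prepared-statement lifecycle (not part of the upstream
 * header), using a hypothetical plan name "p":
 *	 PREPARE p(int) AS SELECT $1 + 1;	-- PrepareStmt: name, argtypes, query
 *	 EXECUTE p(41);						-- ExecuteStmt: name, params
 *	 DEALLOCATE p;						-- DeallocateStmt: name = "p"
 *	 DEALLOCATE ALL;					-- DeallocateStmt: name = NULL
 */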

/*
 *		DROP OWNED statement
 */
typedef struct DropOwnedStmt
{
	NodeTag		type;
	List	   *roles;
	DropBehavior behavior;
} DropOwnedStmt;

/*
 *		REASSIGN OWNED statement
 */
typedef struct ReassignOwnedStmt
{
	NodeTag		type;
	List	   *roles;
	RoleSpec   *newrole;
} ReassignOwnedStmt;

/*
 * TS Dictionary stmts: DefineStmt, RenameStmt and DropStmt are default
 */
typedef struct AlterTSDictionaryStmt
{
	NodeTag		type;
	List	   *dictname;		/* qualified name (list of Value strings) */
	List	   *options;		/* List of DefElem nodes */
} AlterTSDictionaryStmt;

/*
 * TS Configuration stmts: DefineStmt, RenameStmt and DropStmt are default
 */
typedef enum AlterTSConfigType
{
	ALTER_TSCONFIG_ADD_MAPPING,
	ALTER_TSCONFIG_ALTER_MAPPING_FOR_TOKEN,
	ALTER_TSCONFIG_REPLACE_DICT,
	ALTER_TSCONFIG_REPLACE_DICT_FOR_TOKEN,
	ALTER_TSCONFIG_DROP_MAPPING
} AlterTSConfigType;

typedef struct AlterTSConfigurationStmt
{
	NodeTag		type;
	AlterTSConfigType kind;		/* ALTER_TSCONFIG_ADD_MAPPING, etc */
	List	   *cfgname;		/* qualified name (list of Value strings) */

	/*
	 * dicts will be non-NIL if ADD/ALTER MAPPING was specified. If dicts is
	 * NIL, but tokentype isn't, DROP MAPPING was specified.
	 */
	List	   *tokentype;		/* list of Value strings */
	List	   *dicts;			/* list of list of Value strings */
	bool		override;		/* if true - remove old variant */
	bool		replace;		/* if true - replace dictionary by another */
	bool		missing_ok;		/* for DROP - skip error if missing? */
} AlterTSConfigurationStmt;

typedef struct CreatePublicationStmt
{
	NodeTag		type;
	char	   *pubname;		/* Name of the publication */
	List	   *options;		/* List of DefElem nodes */
	List	   *tables;			/* Optional list of tables to add */
	bool		for_all_tables; /* Special publication for all tables in db */
} CreatePublicationStmt;

typedef struct AlterPublicationStmt
{
	NodeTag		type;
	char	   *pubname;		/* Name of the publication */

	/* parameters used for ALTER PUBLICATION ... WITH */
	List	   *options;		/* List of DefElem nodes */

	/* parameters used for ALTER PUBLICATION ... ADD/DROP TABLE */
	List	   *tables;			/* List of tables to add/drop */
	bool		for_all_tables; /* Special publication for all tables in db */
	DefElemAction tableAction;	/* What action to perform with the tables */
} AlterPublicationStmt;

typedef struct CreateSubscriptionStmt
{
	NodeTag		type;
	char	   *subname;		/* Name of the subscription */
	char	   *conninfo;		/* Connection string to publisher */
	List	   *publication;	/* One or more publications to subscribe to */
	List	   *options;		/* List of DefElem nodes */
} CreateSubscriptionStmt;

typedef enum AlterSubscriptionType
{
	ALTER_SUBSCRIPTION_OPTIONS,
	ALTER_SUBSCRIPTION_CONNECTION,
	ALTER_SUBSCRIPTION_PUBLICATION,
	ALTER_SUBSCRIPTION_REFRESH,
	ALTER_SUBSCRIPTION_ENABLED
} AlterSubscriptionType;

typedef struct AlterSubscriptionStmt
{
	NodeTag		type;
	AlterSubscriptionType kind; /* ALTER_SUBSCRIPTION_OPTIONS, etc */
	char	   *subname;		/* Name of the subscription */
	char	   *conninfo;		/* Connection string to publisher */
	List	   *publication;	/* One or more publications to subscribe to */
	List	   *options;		/* List of DefElem nodes */
} AlterSubscriptionStmt;

typedef struct DropSubscriptionStmt
{
	NodeTag		type;
	char	   *subname;		/* Name of the subscription */
	bool		missing_ok;		/* Skip error if missing? */
	DropBehavior behavior;		/* RESTRICT or CASCADE behavior */
} DropSubscriptionStmt;

#endif							/* PARSENODES_H */