postgresql/src/backend/parser/parse_utilcmd.c

/*-------------------------------------------------------------------------
*
* parse_utilcmd.c
* Perform parse analysis work for various utility commands
*
* Formerly we did this work during parse_analyze() in analyze.c. However
* that is fairly unsafe in the presence of querytree caching, since any
* database state that we depend on in making the transformations might be
* obsolete by the time the utility command is executed; and utility commands
* have no infrastructure for holding locks or rechecking plan validity.
* Hence these functions are now called at the start of execution of their
* respective utility commands.
*
* NOTE: in general we must avoid scribbling on the passed-in raw parse
* tree, since it might be in a plan cache. The simplest solution is
* a quick copyObject() call before manipulating the query tree.
*
*
* Portions Copyright (c) 1996-2019, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* src/backend/parser/parse_utilcmd.c
*
*-------------------------------------------------------------------------
*/
#include "postgres.h"
#include "access/amapi.h"
#include "access/htup_details.h"
#include "access/relation.h"
#include "access/reloptions.h"
#include "access/table.h"
#include "catalog/dependency.h"
#include "catalog/heap.h"
#include "catalog/index.h"
#include "catalog/namespace.h"
#include "catalog/pg_am.h"
#include "catalog/pg_collation.h"
#include "catalog/pg_constraint.h"
#include "catalog/pg_opclass.h"
#include "catalog/pg_operator.h"
#include "catalog/pg_statistic_ext.h"
#include "catalog/pg_type.h"
#include "commands/comment.h"
#include "commands/defrem.h"
#include "commands/sequence.h"
#include "commands/tablecmds.h"
#include "commands/tablespace.h"
#include "miscadmin.h"
#include "nodes/makefuncs.h"
#include "nodes/nodeFuncs.h"
#include "optimizer/optimizer.h"
#include "parser/analyze.h"
#include "parser/parse_clause.h"
#include "parser/parse_coerce.h"
#include "parser/parse_collate.h"
#include "parser/parse_expr.h"
#include "parser/parse_relation.h"
#include "parser/parse_target.h"
#include "parser/parse_type.h"
#include "parser/parse_utilcmd.h"
#include "parser/parser.h"
#include "rewrite/rewriteManip.h"
#include "utils/acl.h"
#include "utils/builtins.h"
#include "utils/lsyscache.h"
#include "utils/partcache.h"
#include "utils/rel.h"
#include "utils/ruleutils.h"
#include "utils/syscache.h"
#include "utils/typcache.h"
/* State shared by transformCreateStmt and its subroutines */
typedef struct
{
ParseState *pstate; /* overall parser state */
const char *stmtType; /* "CREATE [FOREIGN] TABLE" or "ALTER TABLE" */
RangeVar *relation; /* relation to create */
Relation rel; /* opened/locked rel, if ALTER */
List *inhRelations; /* relations to inherit from */
bool isforeign; /* true if CREATE/ALTER FOREIGN TABLE */
bool isalter; /* true if altering existing table */
List *columns; /* ColumnDef items */
List *ckconstraints; /* CHECK constraints */
List *fkconstraints; /* FOREIGN KEY constraints */
List *ixconstraints; /* index-creating constraints */
List *inh_indexes; /* cloned indexes from INCLUDING INDEXES */
List *extstats; /* cloned extended statistics */
List *blist; /* "before list" of things to do before
* creating the table */
List *alist; /* "after list" of things to do after creating
* the table */
IndexStmt *pkey; /* PRIMARY KEY index, if any */
bool ispartitioned; /* true if table is partitioned */
PartitionBoundSpec *partbound; /* transformed FOR VALUES */
bool ofType; /* true if statement contains OF typename */
} CreateStmtContext;
/* State shared by transformCreateSchemaStmt and its subroutines */
typedef struct
{
const char *stmtType; /* "CREATE SCHEMA" or "ALTER SCHEMA" */
char *schemaname; /* name of schema */
RoleSpec *authrole; /* owner of schema */
List *sequences; /* CREATE SEQUENCE items */
List *tables; /* CREATE TABLE items */
List *views; /* CREATE VIEW items */
List *indexes; /* CREATE INDEX items */
List *triggers; /* CREATE TRIGGER items */
List *grants; /* GRANT items */
} CreateSchemaStmtContext;
static void transformColumnDefinition(CreateStmtContext *cxt,
ColumnDef *column);
static void transformTableConstraint(CreateStmtContext *cxt,
Constraint *constraint);
static void transformTableLikeClause(CreateStmtContext *cxt,
TableLikeClause *table_like_clause);
static void transformOfType(CreateStmtContext *cxt,
TypeName *ofTypename);
static CreateStatsStmt *generateClonedExtStatsStmt(RangeVar *heapRel,
Oid heapRelid, Oid source_statsid);
static List *get_collation(Oid collation, Oid actual_datatype);
static List *get_opclass(Oid opclass, Oid actual_datatype);
static void transformIndexConstraints(CreateStmtContext *cxt);
static IndexStmt *transformIndexConstraint(Constraint *constraint,
CreateStmtContext *cxt);
static void transformExtendedStatistics(CreateStmtContext *cxt);
static void transformFKConstraints(CreateStmtContext *cxt,
bool skipValidation,
bool isAddConstraint);
static void transformCheckConstraints(CreateStmtContext *cxt,
bool skipValidation);
static void transformConstraintAttrs(CreateStmtContext *cxt,
List *constraintList);
static void transformColumnType(CreateStmtContext *cxt, ColumnDef *column);
static void setSchemaName(char *context_schema, char **stmt_schema_name);
static void transformPartitionCmd(CreateStmtContext *cxt, PartitionCmd *cmd);
static List *transformPartitionRangeBounds(ParseState *pstate, List *blist,
Relation parent);
static void validateInfiniteBounds(ParseState *pstate, List *blist);
static Const *transformPartitionBoundValue(ParseState *pstate, Node *con,
const char *colName, Oid colType, int32 colTypmod,
Oid partCollation);
/*
* transformCreateStmt -
* parse analysis for CREATE TABLE
*
* Returns a List of utility commands to be done in sequence. One of these
* will be the transformed CreateStmt, but there may be additional actions
* to be done before and after the actual DefineRelation() call.
*
* SQL allows constraints to be scattered all over, so thumb through
* the columns and collect all constraints into one place.
* If there are any implied indices (e.g. UNIQUE or PRIMARY KEY)
* then expand those into multiple IndexStmt blocks.
* - thomas 1997-12-02
*/
List *
transformCreateStmt(CreateStmt *stmt, const char *queryString)
{
ParseState *pstate;
CreateStmtContext cxt;
List *result;
List *save_alist;
ListCell *elements;
Oid namespaceid;
Oid existing_relid;
ParseCallbackState pcbstate;
bool is_foreign_table = IsA(stmt, CreateForeignTableStmt);
/*
* We must not scribble on the passed-in CreateStmt, so copy it. (This is
* overkill, but easy.)
*/
stmt = copyObject(stmt);
/* Set up pstate */
pstate = make_parsestate(NULL);
pstate->p_sourcetext = queryString;
/*
* Look up the creation namespace. This also checks permissions on the
* target namespace, locks it against concurrent drops, checks for a
* preexisting relation in that namespace with the same name, and updates
* stmt->relation->relpersistence if the selected namespace is temporary.
*/
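/*
 * The errposition callback makes any error thrown during the namespace
 * lookup point at the relation name's location in the query text.
 */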
setup_parser_errposition_callback(&pcbstate, pstate,
stmt->relation->location);
namespaceid =
RangeVarGetAndCheckCreationNamespace(stmt->relation, NoLock,
&existing_relid);
cancel_parser_errposition_callback(&pcbstate);
/*
* If the relation already exists and the user specified "IF NOT EXISTS",
* bail out with a NOTICE.
*/
if (stmt->if_not_exists && OidIsValid(existing_relid))
{
ereport(NOTICE,
(errcode(ERRCODE_DUPLICATE_TABLE),
errmsg("relation \"%s\" already exists, skipping",
stmt->relation->relname)));
return NIL;
}
/*
* If the target relation name isn't schema-qualified, make it so. This
* prevents some corner cases in which added-on rewritten commands might
* think they should apply to other relations that have the same name and
* are earlier in the search path. But a local temp table is effectively
* specified to be in pg_temp, so no need for anything extra in that case.
*/
if (stmt->relation->schemaname == NULL
&& stmt->relation->relpersistence != RELPERSISTENCE_TEMP)
stmt->relation->schemaname = get_namespace_name(namespaceid);
/* Set up CreateStmtContext */
cxt.pstate = pstate;
if (IsA(stmt, CreateForeignTableStmt))
{
cxt.stmtType = "CREATE FOREIGN TABLE";
cxt.isforeign = true;
}
else
{
cxt.stmtType = "CREATE TABLE";
cxt.isforeign = false;
}
cxt.relation = stmt->relation;
cxt.rel = NULL;
cxt.inhRelations = stmt->inhRelations;
cxt.isalter = false;
cxt.columns = NIL;
cxt.ckconstraints = NIL;
cxt.fkconstraints = NIL;
cxt.ixconstraints = NIL;
cxt.inh_indexes = NIL;
cxt.extstats = NIL;
cxt.blist = NIL;
cxt.alist = NIL;
cxt.pkey = NULL;
cxt.ispartitioned = stmt->partspec != NULL;
cxt.partbound = stmt->partbound;
cxt.ofType = (stmt->ofTypename != NULL);
Assert(!stmt->ofTypename || !stmt->inhRelations); /* grammar enforces */
if (stmt->ofTypename)
transformOfType(&cxt, stmt->ofTypename);
if (stmt->partspec)
{
if (stmt->inhRelations && !stmt->partbound)
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("cannot create partitioned table as inheritance child")));
}
/*
* Run through each primary element in the table creation clause. Separate
* column defs from constraints, and do preliminary analysis.
*/
foreach(elements, stmt->tableElts)
{
Node *element = lfirst(elements);
switch (nodeTag(element))
{
case T_ColumnDef:
transformColumnDefinition(&cxt, (ColumnDef *) element);
break;
case T_Constraint:
transformTableConstraint(&cxt, (Constraint *) element);
break;
case T_TableLikeClause:
transformTableLikeClause(&cxt, (TableLikeClause *) element);
break;
default:
elog(ERROR, "unrecognized node type: %d",
(int) nodeTag(element));
break;
}
}
/*
* transformIndexConstraints wants cxt.alist to contain only index
* statements, so transfer anything we already have into save_alist.
*/
save_alist = cxt.alist;
cxt.alist = NIL;
Assert(stmt->constraints == NIL);
/*
* Postprocess constraints that give rise to index definitions.
*/
transformIndexConstraints(&cxt);
/*
* Postprocess foreign-key constraints.
*/
transformFKConstraints(&cxt, true, false);
/*
* Postprocess check constraints.
*/
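/*
 * Constraint validation can be skipped for an ordinary table, since a
 * just-created table has no rows to check; a foreign table may already
 * contain data, so it does not get that shortcut.
 */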
transformCheckConstraints(&cxt, !is_foreign_table ? true : false);
/*
* Postprocess extended statistics.
*/
transformExtendedStatistics(&cxt);
/*
* Output results.
*/
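/*
 * Commands in cxt.blist (e.g. CREATE SEQUENCE for serial columns) must
 * run before the CreateStmt itself; cxt.alist and save_alist items
 * (e.g. ALTER SEQUENCE OWNED BY, index-building statements) run after.
 */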
stmt->tableElts = cxt.columns;
stmt->constraints = cxt.ckconstraints;
result = lappend(cxt.blist, stmt);
result = list_concat(result, cxt.alist);
result = list_concat(result, save_alist);
return result;
}
/*
* generateSerialExtraStmts
* Generate CREATE SEQUENCE and ALTER SEQUENCE ... OWNED BY statements
* to create the sequence for a serial or identity column.
*
* This includes determining the name the sequence will have. The caller
* can ask to get back the name components by passing non-null pointers
* for snamespace_p and sname_p.
*/
static void
generateSerialExtraStmts(CreateStmtContext *cxt, ColumnDef *column,
Oid seqtypid, List *seqoptions, bool for_identity,
char **snamespace_p, char **sname_p)
{
ListCell *option;
DefElem *nameEl = NULL;
Oid snamespaceid;
char *snamespace;
char *sname;
CreateSeqStmt *seqstmt;
AlterSeqStmt *altseqstmt;
List *attnamelist;
/*
* Determine namespace and name to use for the sequence.
*
* First, check if a sequence name was passed in as an option. This is
* used by pg_dump. Else, generate a name.
*
* Although we use ChooseRelationName, it's not guaranteed that the
* selected sequence name won't conflict; given sufficiently long field
* names, two different serial columns in the same table could be assigned
* the same sequence name, and we'd not notice since we aren't creating
* the sequence quite yet. In practice this seems quite unlikely to be a
* problem, especially since few people would need two serial columns in
* one table.
*/
foreach(option, seqoptions)
{
DefElem *defel = lfirst_node(DefElem, option);
if (strcmp(defel->defname, "sequence_name") == 0)
{
if (nameEl)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("conflicting or redundant options")));
nameEl = defel;
}
}
if (nameEl)
{
RangeVar *rv = makeRangeVarFromNameList(castNode(List, nameEl->arg));
snamespace = rv->schemaname;
if (!snamespace)
{
/* Given unqualified SEQUENCE NAME, select namespace */
if (cxt->rel)
snamespaceid = RelationGetNamespace(cxt->rel);
else
snamespaceid = RangeVarGetCreationNamespace(cxt->relation);
snamespace = get_namespace_name(snamespaceid);
}
sname = rv->relname;
/* Remove the SEQUENCE NAME item from seqoptions */
seqoptions = list_delete_ptr(seqoptions, nameEl);
}
else
{
if (cxt->rel)
snamespaceid = RelationGetNamespace(cxt->rel);
else
{
snamespaceid = RangeVarGetCreationNamespace(cxt->relation);
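/* Keep the statement's relpersistence consistent with the chosen namespace */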
RangeVarAdjustRelationPersistence(cxt->relation, snamespaceid);
}
snamespace = get_namespace_name(snamespaceid);
sname = ChooseRelationName(cxt->relation->relname,
column->colname,
"seq",
snamespaceid,
false);
}
ereport(DEBUG1,
(errmsg("%s will create implicit sequence \"%s\" for serial column \"%s.%s\"",
cxt->stmtType, sname,
cxt->relation->relname, column->colname)));
/*
* Build a CREATE SEQUENCE command to create the sequence object, and add
* it to the list of things to be done before this CREATE/ALTER TABLE.
*/
seqstmt = makeNode(CreateSeqStmt);
seqstmt->for_identity = for_identity;
seqstmt->sequence = makeRangeVar(snamespace, sname, -1);
seqstmt->options = seqoptions;
/*
* If a sequence data type was specified, add it to the options. Prepend
* to the list rather than append; in case a user supplied their own AS
* clause, the "redundant options" error will point to their occurrence,
* not our synthetic one.
*/
if (seqtypid)
seqstmt->options = lcons(makeDefElem("as",
(Node *) makeTypeNameFromOid(seqtypid, -1),
-1),
seqstmt->options);
/*
* If this is ALTER ADD COLUMN, make sure the sequence will be owned by
* the table's owner. The current user might be someone else (perhaps a
* superuser, or someone who's only a member of the owning role), but the
* SEQUENCE OWNED BY mechanisms will bleat unless table and sequence have
* exactly the same owning role.
*/
if (cxt->rel)
seqstmt->ownerId = cxt->rel->rd_rel->relowner;
else
seqstmt->ownerId = InvalidOid;
cxt->blist = lappend(cxt->blist, seqstmt);
/*
* Store the identity sequence name that we decided on. ALTER TABLE ...
* ADD COLUMN ... IDENTITY needs this so that it can fill the new column
* with values from the sequence, while the association of the sequence
* with the table is not set until after the ALTER TABLE.
*/
column->identitySequence = seqstmt->sequence;
/*
* Build an ALTER SEQUENCE ... OWNED BY command to mark the sequence as
* owned by this column, and add it to the list of things to be done after
* this CREATE/ALTER TABLE.
*/
altseqstmt = makeNode(AlterSeqStmt);
altseqstmt->sequence = makeRangeVar(snamespace, sname, -1);
attnamelist = list_make3(makeString(snamespace),
makeString(cxt->relation->relname),
makeString(column->colname));
altseqstmt->options = list_make1(makeDefElem("owned_by",
(Node *) attnamelist, -1));
altseqstmt->for_identity = for_identity;
cxt->alist = lappend(cxt->alist, altseqstmt);
if (snamespace_p)
*snamespace_p = snamespace;
if (sname_p)
*sname_p = sname;
}
/*
* transformColumnDefinition -
* transform a single ColumnDef within CREATE TABLE
* Also used in ALTER TABLE ADD COLUMN
*/
static void
transformColumnDefinition(CreateStmtContext *cxt, ColumnDef *column)
{
bool is_serial;
bool saw_nullable;
bool saw_default;
bool saw_identity;
ListCell *clist;
cxt->columns = lappend(cxt->columns, column);
/* Check for SERIAL pseudo-types */
is_serial = false;
if (column->typeName
&& list_length(column->typeName->names) == 1
&& !column->typeName->pct_type)
{
char *typname = strVal(linitial(column->typeName->names));
if (strcmp(typname, "smallserial") == 0 ||
strcmp(typname, "serial2") == 0)
{
is_serial = true;
column->typeName->names = NIL;
column->typeName->typeOid = INT2OID;
}
else if (strcmp(typname, "serial") == 0 ||
strcmp(typname, "serial4") == 0)
{
is_serial = true;
column->typeName->names = NIL;
column->typeName->typeOid = INT4OID;
}
else if (strcmp(typname, "bigserial") == 0 ||
strcmp(typname, "serial8") == 0)
{
is_serial = true;
column->typeName->names = NIL;
column->typeName->typeOid = INT8OID;
}
/*
* We have to reject "serial[]" explicitly, because once we've set
* typeid, LookupTypeName won't notice arrayBounds. We don't need any
* special coding for serial(typmod) though.
*/
if (is_serial && column->typeName->arrayBounds != NIL)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("array of serial is not implemented"),
parser_errposition(cxt->pstate,
column->typeName->location)));
}
/* Do necessary work on the column type declaration */
if (column->typeName)
transformColumnType(cxt, column);
/* Special actions for SERIAL pseudo-types */
if (is_serial)
{
char *snamespace;
char *sname;
char *qstring;
A_Const *snamenode;
TypeCast *castnode;
FuncCall *funccallnode;
Constraint *constraint;
generateSerialExtraStmts(cxt, column,
column->typeName->typeOid, NIL, false,
&snamespace, &sname);
/*
* Create appropriate constraints for SERIAL. We do this in full,
* rather than shortcutting, so that we will detect any conflicting
* constraints the user wrote (like a different DEFAULT).
*
* Create an expression tree representing the function call
* nextval('sequencename'). We cannot reduce the raw tree to cooked
* form until after the sequence is created, but there's no need to do
* so.
*/
qstring = quote_qualified_identifier(snamespace, sname);
snamenode = makeNode(A_Const);
snamenode->val.type = T_String;
snamenode->val.val.str = qstring;
snamenode->location = -1;
castnode = makeNode(TypeCast);
castnode->typeName = SystemTypeName("regclass");
castnode->arg = (Node *) snamenode;
castnode->location = -1;
funccallnode = makeFuncCall(SystemFuncName("nextval"),
list_make1(castnode),
-1);
constraint = makeNode(Constraint);
constraint->contype = CONSTR_DEFAULT;
constraint->location = -1;
constraint->raw_expr = (Node *) funccallnode;
constraint->cooked_expr = NULL;
column->constraints = lappend(column->constraints, constraint);
constraint = makeNode(Constraint);
constraint->contype = CONSTR_NOTNULL;
constraint->location = -1;
column->constraints = lappend(column->constraints, constraint);
}
/* Process column constraints, if any... */
transformConstraintAttrs(cxt, column->constraints);
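/*
 * transformConstraintAttrs folds DEFERRABLE/NOT DEFERRABLE and INITIALLY
 * DEFERRED/IMMEDIATE attributes into the constraints they modify, so the
 * loop below only needs to skip the leftover CONSTR_ATTR_* nodes.
 */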
saw_nullable = false;
saw_default = false;
saw_identity = false;
foreach(clist, column->constraints)
{
Constraint *constraint = lfirst_node(Constraint, clist);
switch (constraint->contype)
{
case CONSTR_NULL:
if (saw_nullable && column->is_not_null)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("conflicting NULL/NOT NULL declarations for column \"%s\" of table \"%s\"",
column->colname, cxt->relation->relname),
parser_errposition(cxt->pstate,
constraint->location)));
column->is_not_null = false;
saw_nullable = true;
break;
case CONSTR_NOTNULL:
if (saw_nullable && !column->is_not_null)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("conflicting NULL/NOT NULL declarations for column \"%s\" of table \"%s\"",
column->colname, cxt->relation->relname),
parser_errposition(cxt->pstate,
constraint->location)));
column->is_not_null = true;
saw_nullable = true;
break;
case CONSTR_DEFAULT:
if (saw_default)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("multiple default values specified for column \"%s\" of table \"%s\"",
column->colname, cxt->relation->relname),
parser_errposition(cxt->pstate,
constraint->location)));
column->raw_default = constraint->raw_expr;
Assert(constraint->cooked_expr == NULL);
saw_default = true;
break;
case CONSTR_IDENTITY:
{
Type ctype;
Oid typeOid;
if (cxt->ofType)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("identity columns are not supported on typed tables")));
if (cxt->partbound)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("identity columns are not supported on partitions")));
ctype = typenameType(cxt->pstate, column->typeName, NULL);
typeOid = ((Form_pg_type) GETSTRUCT(ctype))->oid;
ReleaseSysCache(ctype);
if (saw_identity)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("multiple identity specifications for column \"%s\" of table \"%s\"",
column->colname, cxt->relation->relname),
parser_errposition(cxt->pstate,
constraint->location)));
generateSerialExtraStmts(cxt, column,
typeOid, constraint->options, true,
NULL, NULL);
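/* generated_when distinguishes GENERATED ALWAYS from GENERATED BY DEFAULT */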
column->identity = constraint->generated_when;
saw_identity = true;
column->is_not_null = true;
break;
}
case CONSTR_CHECK:
cxt->ckconstraints = lappend(cxt->ckconstraints, constraint);
break;
case CONSTR_PRIMARY:
if (cxt->isforeign)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("primary key constraints are not supported on foreign tables"),
parser_errposition(cxt->pstate,
constraint->location)));
/* FALL THRU */
case CONSTR_UNIQUE:
if (cxt->isforeign)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("unique constraints are not supported on foreign tables"),
parser_errposition(cxt->pstate,
constraint->location)));
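/*
 * A column-level PRIMARY KEY or UNIQUE constraint implicitly covers just
 * this column; record that before queueing it for
 * transformIndexConstraints.
 */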
if (constraint->keys == NIL)
constraint->keys = list_make1(makeString(column->colname));
cxt->ixconstraints = lappend(cxt->ixconstraints, constraint);
break;
case CONSTR_EXCLUSION:
/* grammar does not allow EXCLUDE as a column constraint */
elog(ERROR, "column exclusion constraints are not supported");
break;
case CONSTR_FOREIGN:
if (cxt->isforeign)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("foreign key constraints are not supported on foreign tables"),
parser_errposition(cxt->pstate,
constraint->location)));
/*
* Fill in the current attribute's name and throw it into the
* list of FK constraints to be processed later.
*/
constraint->fk_attrs = list_make1(makeString(column->colname));
cxt->fkconstraints = lappend(cxt->fkconstraints, constraint);
break;
case CONSTR_ATTR_DEFERRABLE:
case CONSTR_ATTR_NOT_DEFERRABLE:
case CONSTR_ATTR_DEFERRED:
case CONSTR_ATTR_IMMEDIATE:
/* transformConstraintAttrs took care of these */
break;
default:
elog(ERROR, "unrecognized constraint type: %d",
constraint->contype);
break;
}
if (saw_default && saw_identity)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("both default and identity specified for column \"%s\" of table \"%s\"",
column->colname, cxt->relation->relname),
parser_errposition(cxt->pstate,
constraint->location)));
}
/*
* If needed, generate ALTER FOREIGN TABLE ALTER COLUMN statement to add
* per-column foreign data wrapper options to this column after creation.
*/
if (column->fdwoptions != NIL)
{
AlterTableStmt *stmt;
AlterTableCmd *cmd;
cmd = makeNode(AlterTableCmd);
cmd->subtype = AT_AlterColumnGenericOptions;
cmd->name = column->colname;
cmd->def = (Node *) column->fdwoptions;
cmd->behavior = DROP_RESTRICT;
cmd->missing_ok = false;
stmt = makeNode(AlterTableStmt);
stmt->relation = cxt->relation;
stmt->cmds = NIL;
stmt->relkind = OBJECT_FOREIGN_TABLE;
stmt->cmds = lappend(stmt->cmds, cmd);
cxt->alist = lappend(cxt->alist, stmt);
}
}
/*
* transformTableConstraint
* transform a Constraint node within CREATE TABLE or ALTER TABLE
*/
static void
transformTableConstraint(CreateStmtContext *cxt, Constraint *constraint)
{
switch (constraint->contype)
{
case CONSTR_PRIMARY:
if (cxt->isforeign)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("primary key constraints are not supported on foreign tables"),
parser_errposition(cxt->pstate,
constraint->location)));
cxt->ixconstraints = lappend(cxt->ixconstraints, constraint);
break;
case CONSTR_UNIQUE:
if (cxt->isforeign)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("unique constraints are not supported on foreign tables"),
parser_errposition(cxt->pstate,
constraint->location)));
cxt->ixconstraints = lappend(cxt->ixconstraints, constraint);
break;
case CONSTR_EXCLUSION:
if (cxt->isforeign)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("exclusion constraints are not supported on foreign tables"),
parser_errposition(cxt->pstate,
constraint->location)));
if (cxt->ispartitioned)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("exclusion constraints are not supported on partitioned tables"),
parser_errposition(cxt->pstate,
constraint->location)));
cxt->ixconstraints = lappend(cxt->ixconstraints, constraint);
break;
case CONSTR_CHECK:
cxt->ckconstraints = lappend(cxt->ckconstraints, constraint);
break;
case CONSTR_FOREIGN:
if (cxt->isforeign)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("foreign key constraints are not supported on foreign tables"),
parser_errposition(cxt->pstate,
constraint->location)));
cxt->fkconstraints = lappend(cxt->fkconstraints, constraint);
break;
case CONSTR_NULL:
case CONSTR_NOTNULL:
case CONSTR_DEFAULT:
case CONSTR_ATTR_DEFERRABLE:
case CONSTR_ATTR_NOT_DEFERRABLE:
case CONSTR_ATTR_DEFERRED:
case CONSTR_ATTR_IMMEDIATE:
elog(ERROR, "invalid context for constraint type %d",
constraint->contype);
break;
default:
elog(ERROR, "unrecognized constraint type: %d",
constraint->contype);
break;
}
}
/*
* transformTableLikeClause
*
* Change the LIKE <srctable> portion of a CREATE TABLE statement into
* column definitions which recreate the user defined column portions of
* <srctable>.
*/
static void
transformTableLikeClause(CreateStmtContext *cxt, TableLikeClause *table_like_clause)
{
AttrNumber parent_attno;
Relation relation;
TupleDesc tupleDesc;
TupleConstr *constr;
AttrNumber *attmap;
AclResult aclresult;
char *comment;
ParseCallbackState pcbstate;
setup_parser_errposition_callback(&pcbstate, cxt->pstate,
table_like_clause->relation->location);
/* we could support LIKE in many cases, but worry about it another day */
if (cxt->isforeign)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("LIKE is not supported for creating foreign tables")));
relation = relation_openrv(table_like_clause->relation, AccessShareLock);
if (relation->rd_rel->relkind != RELKIND_RELATION &&
relation->rd_rel->relkind != RELKIND_VIEW &&
relation->rd_rel->relkind != RELKIND_MATVIEW &&
relation->rd_rel->relkind != RELKIND_COMPOSITE_TYPE &&
relation->rd_rel->relkind != RELKIND_FOREIGN_TABLE &&
relation->rd_rel->relkind != RELKIND_PARTITIONED_TABLE)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("\"%s\" is not a table, view, materialized view, composite type, or foreign table",
RelationGetRelationName(relation))));
cancel_parser_errposition_callback(&pcbstate);
/*
* Check for privileges
*/
if (relation->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)
{
aclresult = pg_type_aclcheck(relation->rd_rel->reltype, GetUserId(),
ACL_USAGE);
if (aclresult != ACLCHECK_OK)
aclcheck_error(aclresult, OBJECT_TYPE,
RelationGetRelationName(relation));
}
else
{
aclresult = pg_class_aclcheck(RelationGetRelid(relation), GetUserId(),
ACL_SELECT);
if (aclresult != ACLCHECK_OK)
aclcheck_error(aclresult, get_relkind_objtype(relation->rd_rel->relkind),
RelationGetRelationName(relation));
}
tupleDesc = RelationGetDescr(relation);
constr = tupleDesc->constr;
/*
* Initialize column number map for map_variable_attnos(). We need this
* since dropped columns in the source table aren't copied, so the new
* table can have different column numbers.
*/
attmap = (AttrNumber *) palloc0(sizeof(AttrNumber) * tupleDesc->natts);
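/*
 * Each attmap entry maps a source-table attribute number (1-based) to the
 * corresponding column position in the new table; entries for dropped
 * source columns are simply left zero by palloc0.  The map is filled in as
 * the loop below adds columns.
 */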
/*
* Insert the copied attributes into the cxt for the new table definition.
*/
for (parent_attno = 1; parent_attno <= tupleDesc->natts;
parent_attno++)
{
Form_pg_attribute attribute = TupleDescAttr(tupleDesc,
parent_attno - 1);
char *attributeName = NameStr(attribute->attname);
ColumnDef *def;
/*
* Ignore dropped columns in the parent. attmap entry is left zero.
*/
if (attribute->attisdropped)
continue;
/*
* Create a new column, which is marked as NOT inherited.
*
* For constraints, ONLY the NOT NULL constraint is inherited by the
* new column definition per SQL99.
*/
def = makeNode(ColumnDef);
def->colname = pstrdup(attributeName);
def->typeName = makeTypeNameFromOid(attribute->atttypid,
attribute->atttypmod);
def->inhcount = 0;
def->is_local = true;
def->is_not_null = attribute->attnotnull;
def->is_from_type = false;
def->storage = 0;
def->raw_default = NULL;
def->cooked_default = NULL;
def->collClause = NULL;
def->collOid = attribute->attcollation;
def->constraints = NIL;
def->location = -1;
/*
* Add to column list
*/
cxt->columns = lappend(cxt->columns, def);
attmap[parent_attno - 1] = list_length(cxt->columns);
/*
* Copy default, if present and the default has been requested
*/
if (attribute->atthasdef &&
(table_like_clause->options & CREATE_TABLE_LIKE_DEFAULTS))
{
Node *this_default = NULL;
AttrDefault *attrdef;
int i;
/* Find default in constraint structure */
Assert(constr != NULL);
attrdef = constr->defval;
for (i = 0; i < constr->num_defval; i++)
{
if (attrdef[i].adnum == parent_attno)
{
this_default = stringToNode(attrdef[i].adbin);
break;
}
}
Assert(this_default != NULL);
/*
* If default expr could contain any vars, we'd need to fix 'em,
* but it can't; so default is ready to apply to child.
*/
def->cooked_default = this_default;
}
/*
* Copy identity if requested
*/
if (attribute->attidentity &&
(table_like_clause->options & CREATE_TABLE_LIKE_IDENTITY))
{
Oid seq_relid;
List *seq_options;
/*
* find sequence owned by old column; extract sequence parameters;
* build new create sequence command
*/
seq_relid = getOwnedSequence(RelationGetRelid(relation), attribute->attnum);
seq_options = sequence_options(seq_relid);
generateSerialExtraStmts(cxt, def,
InvalidOid, seq_options, true,
NULL, NULL);
def->identity = attribute->attidentity;
}
/* Likewise, copy storage if requested */
if (table_like_clause->options & CREATE_TABLE_LIKE_STORAGE)
def->storage = attribute->attstorage;
else
def->storage = 0;
/* Likewise, copy comment if requested */
if ((table_like_clause->options & CREATE_TABLE_LIKE_COMMENTS) &&
(comment = GetComment(attribute->attrelid,
RelationRelationId,
attribute->attnum)) != NULL)
{
CommentStmt *stmt = makeNode(CommentStmt);
stmt->objtype = OBJECT_COLUMN;
stmt->object = (Node *) list_make3(makeString(cxt->relation->schemaname),
makeString(cxt->relation->relname),
makeString(def->colname));
stmt->comment = comment;
cxt->alist = lappend(cxt->alist, stmt);
}
}
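/*
 * At this point the new table mirrors the source table's non-dropped
 * columns.  For example, CREATE TABLE t2 (LIKE t1 INCLUDING DEFAULTS
 * INCLUDING IDENTITY INCLUDING STORAGE INCLUDING COMMENTS) will also have
 * picked up column defaults, identity sequences, storage modes, and column
 * comments in the loop above; with no INCLUDING options, only column names,
 * types, and not-null markings are copied.
 */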
/*
* Copy CHECK constraints if requested, being careful to adjust attribute
* numbers so they match the child.
*/
if ((table_like_clause->options & CREATE_TABLE_LIKE_CONSTRAINTS) &&
tupleDesc->constr)
{
int ccnum;
for (ccnum = 0; ccnum < tupleDesc->constr->num_check; ccnum++)
{
char *ccname = tupleDesc->constr->check[ccnum].ccname;
char *ccbin = tupleDesc->constr->check[ccnum].ccbin;
Constraint *n = makeNode(Constraint);
Node *ccbin_node;
bool found_whole_row;
ccbin_node = map_variable_attnos(stringToNode(ccbin),
1, 0,
attmap, tupleDesc->natts,
InvalidOid, &found_whole_row);
/*
* We reject whole-row variables because the whole point of LIKE
* is that the new table's rowtype might later diverge from the
* parent's. So, while translation might be possible right now,
* it wouldn't be possible to guarantee it would work in future.
*/
if (found_whole_row)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("cannot convert whole-row table reference"),
errdetail("Constraint \"%s\" contains a whole-row reference to table \"%s\".",
ccname,
RelationGetRelationName(relation))));
n->contype = CONSTR_CHECK;
n->location = -1;
n->conname = pstrdup(ccname);
n->raw_expr = NULL;
n->cooked_expr = nodeToString(ccbin_node);
cxt->ckconstraints = lappend(cxt->ckconstraints, n);
/* Copy comment on constraint */
if ((table_like_clause->options & CREATE_TABLE_LIKE_COMMENTS) &&
(comment = GetComment(get_relation_constraint_oid(RelationGetRelid(relation),
n->conname, false),
ConstraintRelationId,
0)) != NULL)
{
CommentStmt *stmt = makeNode(CommentStmt);
stmt->objtype = OBJECT_TABCONSTRAINT;
stmt->object = (Node *) list_make3(makeString(cxt->relation->schemaname),
makeString(cxt->relation->relname),
makeString(n->conname));
stmt->comment = comment;
cxt->alist = lappend(cxt->alist, stmt);
}
}
}
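/*
 * An illustrative (hypothetical) case that the whole-row check above
 * rejects: a source constraint such as CHECK (is_valid(t1)), where
 * is_valid() takes the source table's rowtype, since its whole-row
 * reference could not safely be retargeted at the new table.
 */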
/*
* Likewise, copy indexes if requested
*/
if ((table_like_clause->options & CREATE_TABLE_LIKE_INDEXES) &&
relation->rd_rel->relhasindex)
{
List *parent_indexes;
ListCell *l;
parent_indexes = RelationGetIndexList(relation);
foreach(l, parent_indexes)
{
Oid parent_index_oid = lfirst_oid(l);
Relation parent_index;
IndexStmt *index_stmt;
parent_index = index_open(parent_index_oid, AccessShareLock);
/* Build CREATE INDEX statement to recreate the parent_index */
index_stmt = generateClonedIndexStmt(cxt->relation, InvalidOid,
parent_index,
attmap, tupleDesc->natts, NULL);
/* Copy comment on index, if requested */
if (table_like_clause->options & CREATE_TABLE_LIKE_COMMENTS)
{
comment = GetComment(parent_index_oid, RelationRelationId, 0);
/*
* We make use of IndexStmt's idxcomment option, so as not to
* need to know now what name the index will have.
*/
index_stmt->idxcomment = comment;
}
/* Save it in the inh_indexes list for the time being */
cxt->inh_indexes = lappend(cxt->inh_indexes, index_stmt);
index_close(parent_index, AccessShareLock);
}
}
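/*
 * The cloned IndexStmts stashed in cxt->inh_indexes above are not executed
 * here; they are expected to be merged into the new table's index list
 * later in this file's transformation of the CREATE TABLE statement, once
 * the complete column list is known.
 */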
/*
* Likewise, copy extended statistics if requested
*/
if (table_like_clause->options & CREATE_TABLE_LIKE_STATISTICS)
{
List *parent_extstats;
ListCell *l;
parent_extstats = RelationGetStatExtList(relation);
foreach(l, parent_extstats)
{
Oid parent_stat_oid = lfirst_oid(l);
CreateStatsStmt *stats_stmt;
stats_stmt = generateClonedExtStatsStmt(cxt->relation,
RelationGetRelid(relation),
parent_stat_oid);
/* Copy comment on statistics object, if requested */
if (table_like_clause->options & CREATE_TABLE_LIKE_COMMENTS)
{
comment = GetComment(parent_stat_oid, StatisticExtRelationId, 0);
/*
* We make use of CreateStatsStmt's stxcomment option, so as
* not to need to know now what name the statistics will have.
*/
stats_stmt->stxcomment = comment;
}
cxt->extstats = lappend(cxt->extstats, stats_stmt);
}
list_free(parent_extstats);
}
/*
* Close the parent rel, but keep our AccessShareLock on it until xact
* commit. That will prevent someone else from deleting or ALTERing the
* parent before the child is committed.
*/
table_close(relation, NoLock);
}
static void
transformOfType(CreateStmtContext *cxt, TypeName *ofTypename)
{
HeapTuple tuple;
TupleDesc tupdesc;
int i;
Oid ofTypeId;
AssertArg(ofTypename);
tuple = typenameType(NULL, ofTypename, NULL);
check_of_type(tuple);
ofTypeId = ((Form_pg_type) GETSTRUCT(tuple))->oid;
ofTypename->typeOid = ofTypeId; /* cached for later */
tupdesc = lookup_rowtype_tupdesc(ofTypeId, -1);
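/*
 * For a typed table, e.g. CREATE TABLE t OF my_composite_type, each
 * non-dropped attribute of the composite type becomes a column of the new
 * table, as constructed in the loop below.
 */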
for (i = 0; i < tupdesc->natts; i++)
{
Form_pg_attribute attr = TupleDescAttr(tupdesc, i);
ColumnDef *n;
if (attr->attisdropped)
continue;
n = makeNode(ColumnDef);
n->colname = pstrdup(NameStr(attr->attname));
n->typeName = makeTypeNameFromOid(attr->atttypid, attr->atttypmod);
n->inhcount = 0;
n->is_local = true;
n->is_not_null = false;
n->is_from_type = true;
n->storage = 0;
n->raw_default = NULL;
n->cooked_default = NULL;
n->collClause = NULL;
n->collOid = attr->attcollation;
n->constraints = NIL;
n->location = -1;
cxt->columns = lappend(cxt->columns, n);
}
DecrTupleDescRefCount(tupdesc);
ReleaseSysCache(tuple);
}
/*
* Generate an IndexStmt node using information from an already existing index
* "source_idx", for the rel identified either by heapRel or heapRelid.
*
* Attribute numbers should be adjusted according to attmap.
*/
IndexStmt *
generateClonedIndexStmt(RangeVar *heapRel, Oid heapRelid, Relation source_idx,
const AttrNumber *attmap, int attmap_length, Oid *constraintOid)
{
Oid source_relid = RelationGetRelid(source_idx);
HeapTuple ht_idxrel;
HeapTuple ht_idx;
HeapTuple ht_am;
Form_pg_class idxrelrec;
Form_pg_index idxrec;
Form_pg_am amrec;
oidvector *indcollation;
oidvector *indclass;
IndexStmt *index;
List *indexprs;
ListCell *indexpr_item;
Oid indrelid;
int keyno;
Oid keycoltype;
Datum datum;
bool isnull;
Assert((heapRel == NULL && OidIsValid(heapRelid)) ||
(heapRel != NULL && !OidIsValid(heapRelid)));
/*
* Fetch pg_class tuple of source index. We can't use the copy in the
* relcache entry because it doesn't include optional fields.
*/
ht_idxrel = SearchSysCache1(RELOID, ObjectIdGetDatum(source_relid));
if (!HeapTupleIsValid(ht_idxrel))
elog(ERROR, "cache lookup failed for relation %u", source_relid);
idxrelrec = (Form_pg_class) GETSTRUCT(ht_idxrel);
/* Fetch pg_index tuple for source index from relcache entry */
ht_idx = source_idx->rd_indextuple;
idxrec = (Form_pg_index) GETSTRUCT(ht_idx);
indrelid = idxrec->indrelid;
/* Fetch the pg_am tuple of the index' access method */
ht_am = SearchSysCache1(AMOID, ObjectIdGetDatum(idxrelrec->relam));
if (!HeapTupleIsValid(ht_am))
elog(ERROR, "cache lookup failed for access method %u",
idxrelrec->relam);
amrec = (Form_pg_am) GETSTRUCT(ht_am);
/* Extract indcollation from the pg_index tuple */
datum = SysCacheGetAttr(INDEXRELID, ht_idx,
Anum_pg_index_indcollation, &isnull);
Assert(!isnull);
indcollation = (oidvector *) DatumGetPointer(datum);
/* Extract indclass from the pg_index tuple */
datum = SysCacheGetAttr(INDEXRELID, ht_idx,
Anum_pg_index_indclass, &isnull);
Assert(!isnull);
indclass = (oidvector *) DatumGetPointer(datum);
/* Begin building the IndexStmt */
index = makeNode(IndexStmt);
index->relation = heapRel;
index->accessMethod = pstrdup(NameStr(amrec->amname));
if (OidIsValid(idxrelrec->reltablespace))
index->tableSpace = get_tablespace_name(idxrelrec->reltablespace);
else
index->tableSpace = NULL;
index->excludeOpNames = NIL;
index->idxcomment = NULL;
index->indexOid = InvalidOid;
index->oldNode = InvalidOid;
index->unique = idxrec->indisunique;
index->primary = idxrec->indisprimary;
index->transformed = true; /* don't need transformIndexStmt */
index->concurrent = false;
index->if_not_exists = false;
/*
* We don't try to preserve the name of the source index; instead, just
* let DefineIndex() choose a reasonable name. (If we tried to preserve
* the name, we'd get duplicate-relation-name failures unless the source
* table was in a different schema.)
*/
index->idxname = NULL;
/*
* If the index is marked PRIMARY or has an exclusion condition, it's
* certainly from a constraint; else, if it's not marked UNIQUE, it
* certainly isn't. If it is or might be from a constraint, we have to
* fetch the pg_constraint record.
*/
if (index->primary || index->unique || idxrec->indisexclusion)
{
Oid constraintId = get_index_constraint(source_relid);
if (OidIsValid(constraintId))
{
HeapTuple ht_constr;
Form_pg_constraint conrec;
if (constraintOid)
*constraintOid = constraintId;
ht_constr = SearchSysCache1(CONSTROID,
ObjectIdGetDatum(constraintId));
if (!HeapTupleIsValid(ht_constr))
elog(ERROR, "cache lookup failed for constraint %u",
constraintId);
conrec = (Form_pg_constraint) GETSTRUCT(ht_constr);
index->isconstraint = true;
index->deferrable = conrec->condeferrable;
index->initdeferred = conrec->condeferred;
/* If it's an exclusion constraint, we need the operator names */
if (idxrec->indisexclusion)
{
Datum *elems;
int nElems;
int i;
Assert(conrec->contype == CONSTRAINT_EXCLUSION);
/* Extract operator OIDs from the pg_constraint tuple */
datum = SysCacheGetAttr(CONSTROID, ht_constr,
Anum_pg_constraint_conexclop,
&isnull);
if (isnull)
elog(ERROR, "null conexclop for constraint %u",
constraintId);
deconstruct_array(DatumGetArrayTypeP(datum),
OIDOID, sizeof(Oid), true, 'i',
&elems, NULL, &nElems);
for (i = 0; i < nElems; i++)
{
Oid operid = DatumGetObjectId(elems[i]);
HeapTuple opertup;
Form_pg_operator operform;
char *oprname;
char *nspname;
List *namelist;
opertup = SearchSysCache1(OPEROID,
ObjectIdGetDatum(operid));
if (!HeapTupleIsValid(opertup))
elog(ERROR, "cache lookup failed for operator %u",
operid);
operform = (Form_pg_operator) GETSTRUCT(opertup);
oprname = pstrdup(NameStr(operform->oprname));
/* For simplicity we always schema-qualify the op name */
nspname = get_namespace_name(operform->oprnamespace);
namelist = list_make2(makeString(nspname),
makeString(oprname));
index->excludeOpNames = lappend(index->excludeOpNames,
namelist);
ReleaseSysCache(opertup);
}
}
ReleaseSysCache(ht_constr);
}
else
index->isconstraint = false;
}
else
index->isconstraint = false;
/* Get the index expressions, if any */
datum = SysCacheGetAttr(INDEXRELID, ht_idx,
Anum_pg_index_indexprs, &isnull);
if (!isnull)
{
char *exprsString;
exprsString = TextDatumGetCString(datum);
indexprs = (List *) stringToNode(exprsString);
}
else
indexprs = NIL;
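/*
 * indexprs, when present, holds one expression tree per expression column
 * of the source index; the loop below consumes them in column order via
 * indexpr_item.
 */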
/* Build the list of IndexElem */
index->indexParams = NIL;
index->indexIncludingParams = NIL;
indexpr_item = list_head(indexprs);
for (keyno = 0; keyno < idxrec->indnkeyatts; keyno++)
{
2007-11-15 22:14:46 +01:00
IndexElem *iparam;
AttrNumber attnum = idxrec->indkey.values[keyno];
Form_pg_attribute attr = TupleDescAttr(RelationGetDescr(source_idx),
keyno);
int16 opt = source_idx->rd_indoption[keyno];
iparam = makeNode(IndexElem);
if (AttributeNumberIsValid(attnum))
{
/* Simple index column */
char *attname;
attname = get_attname(indrelid, attnum, false);
keycoltype = get_atttype(indrelid, attnum);
iparam->name = attname;
iparam->expr = NULL;
}
else
{
/* Expressional index */
Node *indexkey;
bool found_whole_row;
if (indexpr_item == NULL)
elog(ERROR, "too few entries in indexprs list");
indexkey = (Node *) lfirst(indexpr_item);
indexpr_item = lnext(indexpr_item);
/* Adjust Vars to match new table's column numbering */
indexkey = map_variable_attnos(indexkey,
1, 0,
attmap, attmap_length,
InvalidOid, &found_whole_row);
/* As in transformTableLikeClause, reject whole-row variables */
if (found_whole_row)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("cannot convert whole-row table reference"),
errdetail("Index \"%s\" contains a whole-row table reference.",
RelationGetRelationName(source_idx))));
iparam->name = NULL;
iparam->expr = indexkey;
keycoltype = exprType(indexkey);
}
/* Copy the original index column name */
iparam->indexcolname = pstrdup(NameStr(attr->attname));
/* Add the collation name, if non-default */
iparam->collation = get_collation(indcollation->values[keyno], keycoltype);
/* Add the operator class name, if non-default */
iparam->opclass = get_opclass(indclass->values[keyno], keycoltype);
iparam->ordering = SORTBY_DEFAULT;
iparam->nulls_ordering = SORTBY_NULLS_DEFAULT;
/* Adjust options if necessary */
if (source_idx->rd_indam->amcanorder)
{
/*
* If it supports sort ordering, copy DESC and NULLS opts. Don't
* set non-default settings unnecessarily, though, so as to
* improve the chance of recognizing equivalence to constraint
* indexes.
*/
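/*
 * For example (hypothetical source key): a column indexed DESC NULLS LAST
 * arrives with INDOPTION_DESC set and INDOPTION_NULLS_FIRST clear, so we
 * emit SORTBY_DESC plus an explicit SORTBY_NULLS_LAST; a plain DESC key
 * (which implies NULLS FIRST) keeps the default nulls ordering.
 */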
if (opt & INDOPTION_DESC)
{
iparam->ordering = SORTBY_DESC;
if ((opt & INDOPTION_NULLS_FIRST) == 0)
iparam->nulls_ordering = SORTBY_NULLS_LAST;
}
else
{
if (opt & INDOPTION_NULLS_FIRST)
iparam->nulls_ordering = SORTBY_NULLS_FIRST;
}
}
index->indexParams = lappend(index->indexParams, iparam);
}
/* Handle included columns separately */
for (keyno = idxrec->indnkeyatts; keyno < idxrec->indnatts; keyno++)
{
IndexElem *iparam;
AttrNumber attnum = idxrec->indkey.values[keyno];
Form_pg_attribute attr = TupleDescAttr(RelationGetDescr(source_idx),
keyno);
iparam = makeNode(IndexElem);
if (AttributeNumberIsValid(attnum))
{
/* Simple index column */
char *attname;
attname = get_attname(indrelid, attnum, false);
keycoltype = get_atttype(indrelid, attnum);
iparam->name = attname;
iparam->expr = NULL;
}
else
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("expressions are not supported in included columns")));
/* Copy the original index column name */
iparam->indexcolname = pstrdup(NameStr(attr->attname));
index->indexIncludingParams = lappend(index->indexIncludingParams, iparam);
}
/* Copy reloptions if any */
datum = SysCacheGetAttr(RELOID, ht_idxrel,
Anum_pg_class_reloptions, &isnull);
if (!isnull)
index->options = untransformRelOptions(datum);
/* If it's a partial index, decompile and append the predicate */
datum = SysCacheGetAttr(INDEXRELID, ht_idx,
Anum_pg_index_indpred, &isnull);
if (!isnull)
{
char *pred_str;
Node *pred_tree;
bool found_whole_row;
/* Convert text string to node tree */
pred_str = TextDatumGetCString(datum);
pred_tree = (Node *) stringToNode(pred_str);
/* Adjust Vars to match new table's column numbering */
pred_tree = map_variable_attnos(pred_tree,
1, 0,
attmap, attmap_length,
InvalidOid, &found_whole_row);
/* As in transformTableLikeClause, reject whole-row variables */
if (found_whole_row)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("cannot convert whole-row table reference"),
errdetail("Index \"%s\" contains a whole-row table reference.",
RelationGetRelationName(source_idx))));
index->whereClause = pred_tree;
}
/* Clean up */
ReleaseSysCache(ht_idxrel);
ReleaseSysCache(ht_am);
return index;
}
/*
* Generate a CreateStatsStmt node using information from an already existing
* extended statistic "source_statsid", for the rel identified by heapRel and
* heapRelid.
*/
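/*
 * Illustrative example (names are hypothetical): CREATE TABLE copy_tbl
 * (LIKE src_tbl INCLUDING STATISTICS) presumably ends up here once per
 * extended statistics object on src_tbl; the result is an unnamed
 * CreateStatsStmt roughly equivalent to CREATE STATISTICS (ndistinct,
 * dependencies) ON a, b FROM copy_tbl, rebuilt against the new table.
 */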
static CreateStatsStmt *
generateClonedExtStatsStmt(RangeVar *heapRel, Oid heapRelid,
Oid source_statsid)
{
HeapTuple ht_stats;
Form_pg_statistic_ext statsrec;
CreateStatsStmt *stats;
List *stat_types = NIL;
List *def_names = NIL;
bool isnull;
Datum datum;
ArrayType *arr;
char *enabled;
int i;
Assert(OidIsValid(heapRelid));
Assert(heapRel != NULL);
/*
* Fetch pg_statistic_ext tuple of source statistics object.
*/
ht_stats = SearchSysCache1(STATEXTOID, ObjectIdGetDatum(source_statsid));
if (!HeapTupleIsValid(ht_stats))
elog(ERROR, "cache lookup failed for statistics object %u", source_statsid);
statsrec = (Form_pg_statistic_ext) GETSTRUCT(ht_stats);
/* Determine which statistics types exist */
datum = SysCacheGetAttr(STATEXTOID, ht_stats,
Anum_pg_statistic_ext_stxkind, &isnull);
Assert(!isnull);
arr = DatumGetArrayTypeP(datum);
if (ARR_NDIM(arr) != 1 ||
ARR_HASNULL(arr) ||
ARR_ELEMTYPE(arr) != CHAROID)
elog(ERROR, "stxkind is not a 1-D char array");
enabled = (char *) ARR_DATA_PTR(arr);
for (i = 0; i < ARR_DIMS(arr)[0]; i++)
{
if (enabled[i] == STATS_EXT_NDISTINCT)
stat_types = lappend(stat_types, makeString("ndistinct"));
else if (enabled[i] == STATS_EXT_DEPENDENCIES)
stat_types = lappend(stat_types, makeString("dependencies"));
else
elog(ERROR, "unrecognized statistics kind %c", enabled[i]);
}
/* Determine which columns the statistics are on */
for (i = 0; i < statsrec->stxkeys.dim1; i++)
{
ColumnRef *cref = makeNode(ColumnRef);
AttrNumber attnum = statsrec->stxkeys.values[i];
cref->fields = list_make1(makeString(get_attname(heapRelid,
attnum, false)));
cref->location = -1;
def_names = lappend(def_names, cref);
}
/* finally, build the output node */
stats = makeNode(CreateStatsStmt);
stats->defnames = NULL;
stats->stat_types = stat_types;
stats->exprs = def_names;
stats->relations = list_make1(heapRel);
stats->stxcomment = NULL;
stats->if_not_exists = false;
/* Clean up */
ReleaseSysCache(ht_stats);
return stats;
}
/*
* get_collation - fetch qualified name of a collation
*
* If collation is InvalidOid or is the default for the given actual_datatype,
* then the return value is NIL.
*/
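/*
 * For example (hypothetical column): a text index key declared COLLATE "C"
 * does not use the type's default collation, so this returns the
 * two-element list {pg_catalog, C}; a key using the default collation
 * simply returns NIL.
 */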
static List *
get_collation(Oid collation, Oid actual_datatype)
{
List *result;
HeapTuple ht_coll;
Form_pg_collation coll_rec;
char *nsp_name;
char *coll_name;
if (!OidIsValid(collation))
return NIL; /* easy case */
if (collation == get_typcollation(actual_datatype))
return NIL; /* just let it default */
ht_coll = SearchSysCache1(COLLOID, ObjectIdGetDatum(collation));
if (!HeapTupleIsValid(ht_coll))
elog(ERROR, "cache lookup failed for collation %u", collation);
coll_rec = (Form_pg_collation) GETSTRUCT(ht_coll);
/* For simplicity, we always schema-qualify the name */
nsp_name = get_namespace_name(coll_rec->collnamespace);
coll_name = pstrdup(NameStr(coll_rec->collname));
result = list_make2(makeString(nsp_name), makeString(coll_name));
ReleaseSysCache(ht_coll);
return result;
}
/*
* get_opclass - fetch qualified name of an index operator class
*
* If the opclass is the default for the given actual_datatype, then
* the return value is NIL.
*/
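/*
 * For example (hypothetical column): a text key indexed with
 * text_pattern_ops yields {pg_catalog, text_pattern_ops}, while a key using
 * the type's default btree opclass yields NIL so the generated statement
 * omits the opclass entirely.
 */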
static List *
get_opclass(Oid opclass, Oid actual_datatype)
{
List *result = NIL;
HeapTuple ht_opc;
Form_pg_opclass opc_rec;
ht_opc = SearchSysCache1(CLAOID, ObjectIdGetDatum(opclass));
if (!HeapTupleIsValid(ht_opc))
elog(ERROR, "cache lookup failed for opclass %u", opclass);
opc_rec = (Form_pg_opclass) GETSTRUCT(ht_opc);
if (GetDefaultOpClass(actual_datatype, opc_rec->opcmethod) != opclass)
{
/* For simplicity, we always schema-qualify the name */
char *nsp_name = get_namespace_name(opc_rec->opcnamespace);
char *opc_name = pstrdup(NameStr(opc_rec->opcname));
result = list_make2(makeString(nsp_name), makeString(opc_name));
}
ReleaseSysCache(ht_opc);
return result;
}
/*
* transformIndexConstraints
* Handle UNIQUE, PRIMARY KEY, EXCLUDE constraints, which create indexes.
* We also merge in any index definitions arising from
* LIKE ... INCLUDING INDEXES.
*/
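/*
 * For example (hypothetical table): CREATE TABLE t (a int PRIMARY KEY,
 * b int, UNIQUE (b)) arrives here with two entries in cxt->ixconstraints;
 * each becomes an IndexStmt below, and the PRIMARY KEY additionally forces
 * column a to NOT NULL.
 */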
static void
transformIndexConstraints(CreateStmtContext *cxt)
{
IndexStmt *index;
List *indexlist = NIL;
ListCell *lc;
/*
* Run through the constraints that need to generate an index. For PRIMARY
* KEY, mark each column as NOT NULL and create an index. For UNIQUE or
* EXCLUDE, create an index as for PRIMARY KEY, but do not insist on NOT
* NULL.
*/
foreach(lc, cxt->ixconstraints)
{
Constraint *constraint = lfirst_node(Constraint, lc);
Assert(constraint->contype == CONSTR_PRIMARY ||
constraint->contype == CONSTR_UNIQUE ||
constraint->contype == CONSTR_EXCLUSION);
index = transformIndexConstraint(constraint, cxt);
indexlist = lappend(indexlist, index);
}
/* Add in any indexes defined by LIKE ... INCLUDING INDEXES */
foreach(lc, cxt->inh_indexes)
{
index = (IndexStmt *) lfirst(lc);
if (index->primary)
{
if (cxt->pkey != NULL)
ereport(ERROR,
(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
errmsg("multiple primary keys for table \"%s\" are not allowed",
cxt->relation->relname)));
cxt->pkey = index;
}
indexlist = lappend(indexlist, index);
}
/*
* Scan the index list and remove any redundant index specifications. This
* can happen if, for instance, the user writes UNIQUE PRIMARY KEY. A
* strict reading of SQL would suggest raising an error instead, but that
* strikes me as too anal-retentive. - tgl 2001-02-14
*
* XXX in ALTER TABLE case, it'd be nice to look for duplicate
* pre-existing indexes, too.
*/
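/*
 * For example, a column declared "a int UNIQUE PRIMARY KEY" produces two
 * IndexStmts with identical parameters; the loop below merges them so that
 * only the primary-key index survives.
 */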
Assert(cxt->alist == NIL);
if (cxt->pkey != NULL)
{
/* Make sure we keep the PKEY index in preference to others... */
cxt->alist = list_make1(cxt->pkey);
}
foreach(lc, indexlist)
{
bool keep = true;
ListCell *k;
index = lfirst(lc);
/* if it's pkey, it's already in cxt->alist */
if (index == cxt->pkey)
continue;
foreach(k, cxt->alist)
{
IndexStmt *priorindex = lfirst(k);
if (equal(index->indexParams, priorindex->indexParams) &&
equal(index->indexIncludingParams, priorindex->indexIncludingParams) &&
equal(index->whereClause, priorindex->whereClause) &&
equal(index->excludeOpNames, priorindex->excludeOpNames) &&
strcmp(index->accessMethod, priorindex->accessMethod) == 0 &&
index->deferrable == priorindex->deferrable &&
index->initdeferred == priorindex->initdeferred)
{
priorindex->unique |= index->unique;
/*
* If the prior index is as yet unnamed, and this one is
* named, then transfer the name to the prior index. This
* ensures that if we have named and unnamed constraints,
* we'll use (at least one of) the names for the index.
*/
if (priorindex->idxname == NULL)
priorindex->idxname = index->idxname;
keep = false;
break;
}
}
if (keep)
cxt->alist = lappend(cxt->alist, index);
}
}
/*
* transformIndexConstraint
* Transform one UNIQUE, PRIMARY KEY, or EXCLUDE constraint for
* transformIndexConstraints.
*/
static IndexStmt *
transformIndexConstraint(Constraint *constraint, CreateStmtContext *cxt)
{
IndexStmt *index;
ListCell *lc;
index = makeNode(IndexStmt);
index->unique = (constraint->contype != CONSTR_EXCLUSION);
index->primary = (constraint->contype == CONSTR_PRIMARY);
if (index->primary)
{
if (cxt->pkey != NULL)
ereport(ERROR,
(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
errmsg("multiple primary keys for table \"%s\" are not allowed",
cxt->relation->relname),
parser_errposition(cxt->pstate, constraint->location)));
cxt->pkey = index;
/*
* In ALTER TABLE case, a primary index might already exist, but
* DefineIndex will check for it.
*/
}
index->isconstraint = true;
index->deferrable = constraint->deferrable;
index->initdeferred = constraint->initdeferred;
if (constraint->conname != NULL)
index->idxname = pstrdup(constraint->conname);
else
index->idxname = NULL; /* DefineIndex will choose name */
index->relation = cxt->relation;
index->accessMethod = constraint->access_method ? constraint->access_method : DEFAULT_INDEX_TYPE;
index->options = constraint->options;
index->tableSpace = constraint->indexspace;
index->whereClause = constraint->where_clause;
index->indexParams = NIL;
index->indexIncludingParams = NIL;
index->excludeOpNames = NIL;
index->idxcomment = NULL;
index->indexOid = InvalidOid;
index->oldNode = InvalidOid;
index->transformed = false;
index->concurrent = false;
index->if_not_exists = false;
/*
* If it's ALTER TABLE ADD CONSTRAINT USING INDEX, look up the index and
* verify it's usable, then extract the implied column name list. (We
* will not actually need the column name list at runtime, but we need it
* now to check for duplicate column entries below.)
*/
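/*
 * For example (hypothetical names): ALTER TABLE t ADD CONSTRAINT t_pkey
 * PRIMARY KEY USING INDEX t_key_idx sets constraint->indexname, and the
 * checks below verify that t_key_idx is a valid, unique, non-partial btree
 * on t before adopting it for the constraint.
 */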
if (constraint->indexname != NULL)
{
char *index_name = constraint->indexname;
Relation heap_rel = cxt->rel;
Oid index_oid;
Relation index_rel;
Form_pg_index index_form;
oidvector *indclass;
Datum indclassDatum;
bool isnull;
int i;
/* Grammar should not allow this with explicit column list */
Assert(constraint->keys == NIL);
/* Grammar should only allow PRIMARY and UNIQUE constraints */
Assert(constraint->contype == CONSTR_PRIMARY ||
constraint->contype == CONSTR_UNIQUE);
/* Must be ALTER, not CREATE, but grammar doesn't enforce that */
if (!cxt->isalter)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("cannot use an existing index in CREATE TABLE"),
parser_errposition(cxt->pstate, constraint->location)));
/* Look for the index in the same schema as the table */
index_oid = get_relname_relid(index_name, RelationGetNamespace(heap_rel));
if (!OidIsValid(index_oid))
ereport(ERROR,
(errcode(ERRCODE_UNDEFINED_OBJECT),
errmsg("index \"%s\" does not exist", index_name),
parser_errposition(cxt->pstate, constraint->location)));
/* Open the index (this will throw an error if it is not an index) */
index_rel = index_open(index_oid, AccessShareLock);
index_form = index_rel->rd_index;
/* Check that it does not have an associated constraint already */
if (OidIsValid(get_index_constraint(index_oid)))
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("index \"%s\" is already associated with a constraint",
index_name),
parser_errposition(cxt->pstate, constraint->location)));
/* Perform validity checks on the index */
if (index_form->indrelid != RelationGetRelid(heap_rel))
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("index \"%s\" does not belong to table \"%s\"",
index_name, RelationGetRelationName(heap_rel)),
parser_errposition(cxt->pstate, constraint->location)));
if (!index_form->indisvalid)
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("index \"%s\" is not valid", index_name),
parser_errposition(cxt->pstate, constraint->location)));
if (!index_form->indisunique)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("\"%s\" is not a unique index", index_name),
errdetail("Cannot create a primary key or unique constraint using such an index."),
parser_errposition(cxt->pstate, constraint->location)));
if (RelationGetIndexExpressions(index_rel) != NIL)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("index \"%s\" contains expressions", index_name),
errdetail("Cannot create a primary key or unique constraint using such an index."),
parser_errposition(cxt->pstate, constraint->location)));
if (RelationGetIndexPredicate(index_rel) != NIL)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("\"%s\" is a partial index", index_name),
errdetail("Cannot create a primary key or unique constraint using such an index."),
parser_errposition(cxt->pstate, constraint->location)));
/*
* It's probably unsafe to change a deferred index to non-deferred. (A
* non-constraint index couldn't be deferred anyway, so this case
* should never occur; no need to sweat, but let's check it.)
*/
if (!index_form->indimmediate && !constraint->deferrable)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("\"%s\" is a deferrable index", index_name),
errdetail("Cannot create a non-deferrable constraint using a deferrable index."),
parser_errposition(cxt->pstate, constraint->location)));
/*
* Insist on it being a btree. That's the only kind that supports
* uniqueness at the moment anyway; but we must have an index that
* exactly matches what you'd get from plain ADD CONSTRAINT syntax,
* else dump and reload will produce a different index (breaking
* pg_upgrade in particular).
*/
if (index_rel->rd_rel->relam != get_index_am_oid(DEFAULT_INDEX_TYPE, false))
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("index \"%s\" is not a btree", index_name),
parser_errposition(cxt->pstate, constraint->location)));
/* Must get indclass the hard way */
indclassDatum = SysCacheGetAttr(INDEXRELID, index_rel->rd_indextuple,
Anum_pg_index_indclass, &isnull);
Assert(!isnull);
indclass = (oidvector *) DatumGetPointer(indclassDatum);
for (i = 0; i < index_form->indnatts; i++)
{
int16 attnum = index_form->indkey.values[i];
const FormData_pg_attribute *attform;
char *attname;
Oid defopclass;
/*
* We shouldn't see attnum == 0 here, since we already rejected
* expression indexes. If we do, SystemAttributeDefinition will
* throw an error.
*/
if (attnum > 0)
{
Assert(attnum <= heap_rel->rd_att->natts);
attform = TupleDescAttr(heap_rel->rd_att, attnum - 1);
}
else
attform = SystemAttributeDefinition(attnum);
attname = pstrdup(NameStr(attform->attname));
if (i < index_form->indnkeyatts)
{
/*
* Insist on default opclass and sort options. While the
* index would still work as a constraint with non-default
* settings, it might not provide exactly the same uniqueness
* semantics as you'd get from a normally-created constraint;
* and there's also the dump/reload problem mentioned above.
*/
defopclass = GetDefaultOpClass(attform->atttypid,
index_rel->rd_rel->relam);
if (indclass->values[i] != defopclass ||
index_rel->rd_indoption[i] != 0)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("index \"%s\" column number %d does not have default sorting behavior", index_name, i + 1),
errdetail("Cannot create a primary key or unique constraint using such an index."),
parser_errposition(cxt->pstate, constraint->location)));
constraint->keys = lappend(constraint->keys, makeString(attname));
}
else
constraint->including = lappend(constraint->including, makeString(attname));
}
/* Close the index relation but keep the lock */
relation_close(index_rel, NoLock);
index->indexOid = index_oid;
}
/*
* If it's an EXCLUDE constraint, the grammar returns a list of pairs of
* IndexElems and operator names. We have to break that apart into
* separate lists.
*/
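/*
 * For example (hypothetical table): EXCLUDE USING gist (room WITH =,
 * during WITH &&) arrives as two (IndexElem, operator-name) pairs, which
 * the loop below splits into indexParams and excludeOpNames.
 */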
if (constraint->contype == CONSTR_EXCLUSION)
{
foreach(lc, constraint->exclusions)
{
List *pair = (List *) lfirst(lc);
IndexElem *elem;
List *opname;
Assert(list_length(pair) == 2);
elem = linitial_node(IndexElem, pair);
opname = lsecond_node(List, pair);
index->indexParams = lappend(index->indexParams, elem);
index->excludeOpNames = lappend(index->excludeOpNames, opname);
}
}
/*
* For UNIQUE and PRIMARY KEY, we just have a list of column names.
*
* Make sure referenced keys exist. If we are making a PRIMARY KEY index,
* also make sure they are NOT NULL, if possible. (Although we could leave
* it to DefineIndex to mark the columns NOT NULL, it's more efficient to
* get it right the first time.)
*/
else
{
foreach(lc, constraint->keys)
{
char *key = strVal(lfirst(lc));
bool found = false;
ColumnDef *column = NULL;
ListCell *columns;
IndexElem *iparam;
/* Make sure the referenced column exists. */
foreach(columns, cxt->columns)
{
column = castNode(ColumnDef, lfirst(columns));
if (strcmp(column->colname, key) == 0)
{
found = true;
break;
}
}
if (found)
{
/* found column in the new table; force it to be NOT NULL */
if (constraint->contype == CONSTR_PRIMARY)
column->is_not_null = true;
}
else if (SystemAttributeByName(key) != NULL)
{
/*
* column will be a system column in the new table, so accept
* it. System columns can't ever be null, so no need to worry
* about PRIMARY/NOT NULL constraint.
*/
found = true;
}
else if (cxt->inhRelations)
{
/* try inherited tables */
ListCell *inher;
foreach(inher, cxt->inhRelations)
{
RangeVar *inh = castNode(RangeVar, lfirst(inher));
Relation rel;
int count;
rel = table_openrv(inh, AccessShareLock);
/* check user requested inheritance from valid relkind */
if (rel->rd_rel->relkind != RELKIND_RELATION &&
rel->rd_rel->relkind != RELKIND_FOREIGN_TABLE &&
rel->rd_rel->relkind != RELKIND_PARTITIONED_TABLE)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("inherited relation \"%s\" is not a table or foreign table",
inh->relname)));
for (count = 0; count < rel->rd_att->natts; count++)
{
Form_pg_attribute inhattr = TupleDescAttr(rel->rd_att,
count);
char *inhname = NameStr(inhattr->attname);
if (inhattr->attisdropped)
continue;
if (strcmp(key, inhname) == 0)
{
found = true;
/*
* We currently have no easy way to force an
* inherited column to be NOT NULL at creation, if
* its parent wasn't so already. We leave it to
* DefineIndex to fix things up in this case.
*/
break;
}
}
table_close(rel, NoLock);
if (found)
break;
}
}
/*
* In the ALTER TABLE case, don't complain about index keys not
* created in the command; they may well exist already.
* DefineIndex will complain about them if not, and will also take
* care of marking them NOT NULL.
*/
if (!found && !cxt->isalter)
ereport(ERROR,
(errcode(ERRCODE_UNDEFINED_COLUMN),
errmsg("column \"%s\" named in key does not exist", key),
parser_errposition(cxt->pstate, constraint->location)));
/* Check for PRIMARY KEY(foo, foo) */
foreach(columns, index->indexParams)
{
iparam = (IndexElem *) lfirst(columns);
if (iparam->name && strcmp(key, iparam->name) == 0)
{
if (index->primary)
ereport(ERROR,
(errcode(ERRCODE_DUPLICATE_COLUMN),
errmsg("column \"%s\" appears twice in primary key constraint",
key),
parser_errposition(cxt->pstate, constraint->location)));
else
ereport(ERROR,
(errcode(ERRCODE_DUPLICATE_COLUMN),
errmsg("column \"%s\" appears twice in unique constraint",
key),
parser_errposition(cxt->pstate, constraint->location)));
}
}
/* OK, add it to the index definition */
iparam = makeNode(IndexElem);
iparam->name = pstrdup(key);
iparam->expr = NULL;
iparam->indexcolname = NULL;
iparam->collation = NIL;
iparam->opclass = NIL;
iparam->ordering = SORTBY_DEFAULT;
iparam->nulls_ordering = SORTBY_NULLS_DEFAULT;
index->indexParams = lappend(index->indexParams, iparam);
}
}
/* Add included columns to index definition */
foreach(lc, constraint->including)
{
char *key = strVal(lfirst(lc));
bool found = false;
ColumnDef *column = NULL;
ListCell *columns;
IndexElem *iparam;
foreach(columns, cxt->columns)
{
column = lfirst_node(ColumnDef, columns);
if (strcmp(column->colname, key) == 0)
{
found = true;
break;
}
}
if (!found)
{
if (SystemAttributeByName(key) != NULL)
{
/*
* column will be a system column in the new table, so accept
* it. System columns can't ever be null, so no need to worry
* about PRIMARY/NOT NULL constraint.
*/
found = true;
}
else if (cxt->inhRelations)
{
/* try inherited tables */
ListCell *inher;
foreach(inher, cxt->inhRelations)
{
RangeVar *inh = lfirst_node(RangeVar, inher);
Relation rel;
int count;
rel = table_openrv(inh, AccessShareLock);
/* check user requested inheritance from valid relkind */
if (rel->rd_rel->relkind != RELKIND_RELATION &&
rel->rd_rel->relkind != RELKIND_FOREIGN_TABLE &&
rel->rd_rel->relkind != RELKIND_PARTITIONED_TABLE)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("inherited relation \"%s\" is not a table or foreign table",
inh->relname)));
for (count = 0; count < rel->rd_att->natts; count++)
{
Form_pg_attribute inhattr = TupleDescAttr(rel->rd_att,
count);
char *inhname = NameStr(inhattr->attname);
if (inhattr->attisdropped)
continue;
if (strcmp(key, inhname) == 0)
{
found = true;
/*
* We currently have no easy way to force an
* inherited column to be NOT NULL at creation, if
* its parent wasn't so already. We leave it to
* DefineIndex to fix things up in this case.
*/
break;
}
}
table_close(rel, NoLock);
if (found)
break;
}
}
}
/*
* In the ALTER TABLE case, don't complain about index keys not
* created in the command; they may well exist already. DefineIndex
* will complain about them if not, and will also take care of marking
* them NOT NULL.
*/
if (!found && !cxt->isalter)
ereport(ERROR,
(errcode(ERRCODE_UNDEFINED_COLUMN),
errmsg("column \"%s\" named in key does not exist", key),
parser_errposition(cxt->pstate, constraint->location)));
/* OK, add it to the index definition */
iparam = makeNode(IndexElem);
iparam->name = pstrdup(key);
iparam->expr = NULL;
iparam->indexcolname = NULL;
iparam->collation = NIL;
iparam->opclass = NIL;
index->indexIncludingParams = lappend(index->indexIncludingParams, iparam);
}
return index;
}
/*
* transformExtendedStatistics
* Handle extended statistic objects
*
* Right now, there's nothing to do here, so we just append the list to
* the existing "after" list.
*/
static void
transformExtendedStatistics(CreateStmtContext *cxt)
{
cxt->alist = list_concat(cxt->alist, cxt->extstats);
}
/*
* transformCheckConstraints
* handle CHECK constraints
*
* Right now, there's nothing to do here when called from ALTER TABLE,
* but the other constraint-transformation functions are called in both
* the CREATE TABLE and ALTER TABLE paths, so do the same here, and just
* don't do anything if we're not authorized to skip validation.
*/
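/*
 * For example, in CREATE TABLE t (price numeric CHECK (price > 0)) the new
 * table is necessarily empty, so the caller passes skipValidation = true
 * and the constraint is marked valid without scanning any data.
 */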
static void
transformCheckConstraints(CreateStmtContext *cxt, bool skipValidation)
{
ListCell *ckclist;
if (cxt->ckconstraints == NIL)
return;
/*
* If creating a new table (but not a foreign table), we can safely skip
* validation of check constraints, and nonetheless mark them valid. (This
* will override any user-supplied NOT VALID flag.)
*/
if (skipValidation)
{
foreach(ckclist, cxt->ckconstraints)
{
Constraint *constraint = (Constraint *) lfirst(ckclist);
constraint->skip_validation = true;
constraint->initially_valid = true;
}
}
}
/*
* transformFKConstraints
* handle FOREIGN KEY constraints
*/
static void
transformFKConstraints(CreateStmtContext *cxt,
bool skipValidation, bool isAddConstraint)
{
ListCell *fkclist;
if (cxt->fkconstraints == NIL)
return;
/*
* If CREATE TABLE or adding a column with NULL default, we can safely
* skip validation of FK constraints, and nonetheless mark them valid.
* (This will override any user-supplied NOT VALID flag.)
*/
if (skipValidation)
{
foreach(fkclist, cxt->fkconstraints)
{
Constraint *constraint = (Constraint *) lfirst(fkclist);
constraint->skip_validation = true;
constraint->initially_valid = true;
}
}
/*
* For CREATE TABLE or ALTER TABLE ADD COLUMN, gin up an ALTER TABLE ADD
* CONSTRAINT command to execute after the basic command is complete. (If
* called from ADD CONSTRAINT, that routine will add the FK constraints to
* its own subcommand list.)
*
* Note: the ADD CONSTRAINT command must also execute after any index
* creation commands. Thus, this should run after
* transformIndexConstraints, so that the CREATE INDEX commands are
* already in cxt->alist.
*/
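/*
 * For example (hypothetical tables): CREATE TABLE orders (customer_id int
 * REFERENCES customers (id)) does not install the foreign key here;
 * instead an ALTER TABLE orders ADD CONSTRAINT subcommand is queued in
 * cxt->alist so that it runs after the table and its indexes exist.
 */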
if (!isAddConstraint)
{
AlterTableStmt *alterstmt = makeNode(AlterTableStmt);
alterstmt->relation = cxt->relation;
alterstmt->cmds = NIL;
alterstmt->relkind = OBJECT_TABLE;
foreach(fkclist, cxt->fkconstraints)
{
Constraint *constraint = (Constraint *) lfirst(fkclist);
AlterTableCmd *altercmd = makeNode(AlterTableCmd);
altercmd->subtype = AT_ProcessedConstraint;
altercmd->name = NULL;
altercmd->def = (Node *) constraint;
alterstmt->cmds = lappend(alterstmt->cmds, altercmd);
}
cxt->alist = lappend(cxt->alist, alterstmt);
}
}
/*
* transformIndexStmt - parse analysis for CREATE INDEX and ALTER TABLE
*
* Note: this is a no-op for an index not using either index expressions or
* a predicate expression. There are several code paths that create indexes
* without bothering to call this, because they know they don't have any
* such expressions to deal with.
*
* To avoid race conditions, it's important that this function rely only on
* the passed-in relid (and not on stmt->relation) to determine the target
* relation.
*/
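/*
 * For example (hypothetical index): CREATE INDEX ON t (lower(name)) WHERE
 * NOT archived needs this step to run parse analysis on both the expression
 * and the predicate, whereas CREATE INDEX ON t (name) has neither and may
 * bypass this function entirely.
 */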
IndexStmt *
transformIndexStmt(Oid relid, IndexStmt *stmt, const char *queryString)
{
ParseState *pstate;
RangeTblEntry *rte;
ListCell *l;
Relation rel;
/* Nothing to do if statement already transformed. */
if (stmt->transformed)
return stmt;
/*
* We must not scribble on the passed-in IndexStmt, so copy it. (This is
* overkill, but easy.)
*/
stmt = copyObject(stmt);
/* Set up pstate */
pstate = make_parsestate(NULL);
pstate->p_sourcetext = queryString;
/*
* Put the parent table into the rtable so that the expressions can refer
* to its fields without qualification. Caller is responsible for locking
* relation, but we still need to open it.
*/
rel = relation_open(relid, NoLock);
rte = addRangeTableEntryForRelation(pstate, rel,
AccessShareLock,
NULL, false, true);
/* no to join list, yes to namespaces */
addRTEtoQuery(pstate, rte, false, true, true);
/* take care of the where clause */
if (stmt->whereClause)
{
stmt->whereClause = transformWhereClause(pstate,
stmt->whereClause,
EXPR_KIND_INDEX_PREDICATE,
"WHERE");
/* we have to fix its collations too */
assign_expr_collations(pstate, stmt->whereClause);
}
/* take care of any index expressions */
foreach(l, stmt->indexParams)
{
IndexElem *ielem = (IndexElem *) lfirst(l);
if (ielem->expr)
{
/* Extract preliminary index col name before transforming expr */
if (ielem->indexcolname == NULL)
ielem->indexcolname = FigureIndexColname(ielem->expr);
/* Now do parse transformation of the expression */
ielem->expr = transformExpr(pstate, ielem->expr,
EXPR_KIND_INDEX_EXPRESSION);
/* We have to fix its collations too */
assign_expr_collations(pstate, ielem->expr);
/*
* transformExpr() should have already rejected subqueries,
* aggregates, window functions, and SRFs, based on the EXPR_KIND_
* for an index expression.
*
* DefineIndex() will make more checks.
*/
}
}
/*
* Check that only the base rel is mentioned. (This should be dead code
* now that add_missing_from is history.)
*/
if (list_length(pstate->p_rtable) != 1)
ereport(ERROR,
(errcode(ERRCODE_INVALID_COLUMN_REFERENCE),
errmsg("index expressions and predicates can refer only to the table being indexed")));
free_parsestate(pstate);
/* Close relation */
table_close(rel, NoLock);
/* Mark statement as successfully transformed */
stmt->transformed = true;
return stmt;
}
/*
* transformRuleStmt -
* transform a CREATE RULE Statement. The action is a list of parse
* trees which is transformed into a list of query trees, and we also
* transform the WHERE clause if any.
*
* actions and whereClause are output parameters that receive the
* transformed results.
*
* Note that we must not scribble on the passed-in RuleStmt, so we do
* copyObject() on the actions and WHERE clause.
*/
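/*
 * Illustrative example (table and column names are hypothetical):
 * CREATE RULE log_upd AS ON UPDATE TO accounts
 *     DO ALSO INSERT INTO accounts_log VALUES (old.id, new.balance);
 * the action list then holds one INSERT statement, and its OLD/NEW
 * references are resolved against the rangetable entries set up below.
 */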
void
transformRuleStmt(RuleStmt *stmt, const char *queryString,
List **actions, Node **whereClause)
{
Relation rel;
ParseState *pstate;
RangeTblEntry *oldrte;
RangeTblEntry *newrte;
/*
* To avoid deadlock, make sure the first thing we do is grab
* AccessExclusiveLock on the target relation. This will be needed by
* DefineQueryRewrite(), and we don't want to grab a lesser lock
* beforehand.
*/
rel = table_openrv(stmt->relation, AccessExclusiveLock);
if (rel->rd_rel->relkind == RELKIND_MATVIEW)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("rules on materialized views are not supported")));
/* Set up pstate */
pstate = make_parsestate(NULL);
pstate->p_sourcetext = queryString;
/*
* NOTE: 'OLD' must always have a varno equal to 1 and 'NEW' equal to 2.
* Set up their RTEs in the main pstate for use in parsing the rule
* qualification.
*/
oldrte = addRangeTableEntryForRelation(pstate, rel,
AccessShareLock,
makeAlias("old", NIL),
false, false);
newrte = addRangeTableEntryForRelation(pstate, rel,
AccessShareLock,
makeAlias("new", NIL),
false, false);
/* Must override addRangeTableEntry's default access-check flags */
oldrte->requiredPerms = 0;
newrte->requiredPerms = 0;
/*
* They must be in the namespace too for lookup purposes, but only add the
* one(s) that are relevant for the current kind of rule. In an UPDATE
* rule, quals must refer to OLD.field or NEW.field to be unambiguous, but
* there's no need to be so picky for INSERT & DELETE. We do not add them
* to the joinlist.
*/
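/*
 * E.g. an ON UPDATE rule qualification can say
 * WHERE old.balance <> new.balance, whereas an ON INSERT rule
 * qualification may only reference NEW.
 */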
switch (stmt->event)
{
case CMD_SELECT:
addRTEtoQuery(pstate, oldrte, false, true, true);
break;
case CMD_UPDATE:
addRTEtoQuery(pstate, oldrte, false, true, true);
addRTEtoQuery(pstate, newrte, false, true, true);
break;
case CMD_INSERT:
addRTEtoQuery(pstate, newrte, false, true, true);
break;
case CMD_DELETE:
addRTEtoQuery(pstate, oldrte, false, true, true);
break;
default:
elog(ERROR, "unrecognized event type: %d",
(int) stmt->event);
break;
}
/* take care of the where clause */
*whereClause = transformWhereClause(pstate,
(Node *) copyObject(stmt->whereClause),
EXPR_KIND_WHERE,
"WHERE");
/* we have to fix its collations too */
assign_expr_collations(pstate, *whereClause);
/* this is probably dead code without add_missing_from: */
if (list_length(pstate->p_rtable) != 2) /* naughty, naughty... */
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("rule WHERE condition cannot contain references to other relations")));
/*
* 'instead nothing' rules with a qualification need a query rangetable so
* the rewrite handler can add the negated rule qualification to the
* original query. We create a query with the new command type CMD_NOTHING
* here that is treated specially by the rewrite system.
*/
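/*
 * E.g. CREATE RULE protect AS ON DELETE TO tab
 *     WHERE old.locked DO INSTEAD NOTHING
 * (names illustrative) has an empty action list, yet still needs this
 * rangetable so the negated qual can be applied to the original DELETE.
 */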
if (stmt->actions == NIL)
{
Query *nothing_qry = makeNode(Query);
nothing_qry->commandType = CMD_NOTHING;
nothing_qry->rtable = pstate->p_rtable;
nothing_qry->jointree = makeFromExpr(NIL, NULL); /* no join wanted */
*actions = list_make1(nothing_qry);
}
else
{
ListCell *l;
List *newactions = NIL;
/*
* transform each statement, like parse_sub_analyze()
*/
foreach(l, stmt->actions)
{
Node *action = (Node *) lfirst(l);
ParseState *sub_pstate = make_parsestate(NULL);
Query *sub_qry,
*top_subqry;
bool has_old,
has_new;
/*
* Since outer ParseState isn't parent of inner, have to pass down
* the query text by hand.
*/
sub_pstate->p_sourcetext = queryString;
/*
* Set up OLD/NEW in the rtable for this statement. The entries
* are added only to relnamespace, not varnamespace, because we
* don't want them to be referred to by unqualified field names
* nor "*" in the rule actions. We decide later whether to put
* them in the joinlist.
*/
oldrte = addRangeTableEntryForRelation(sub_pstate, rel,
AccessShareLock,
makeAlias("old", NIL),
false, false);
newrte = addRangeTableEntryForRelation(sub_pstate, rel,
AccessShareLock,
makeAlias("new", NIL),
false, false);
oldrte->requiredPerms = 0;
newrte->requiredPerms = 0;
addRTEtoQuery(sub_pstate, oldrte, false, true, false);
addRTEtoQuery(sub_pstate, newrte, false, true, false);
/* Transform the rule action statement */
top_subqry = transformStmt(sub_pstate,
(Node *) copyObject(action));
/*
* We cannot support utility-statement actions (eg NOTIFY) with
* nonempty rule WHERE conditions, because there's no way to make
* the utility action execute conditionally.
*/
if (top_subqry->commandType == CMD_UTILITY &&
*whereClause != NULL)
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("rules with WHERE conditions can only have SELECT, INSERT, UPDATE, or DELETE actions")));
/*
* If the action is INSERT...SELECT, OLD/NEW have been pushed down
* into the SELECT, and that's what we need to look at. (Ugly
* kluge ... try to fix this when we redesign querytrees.)
*/
sub_qry = getInsertSelectQuery(top_subqry, NULL);
/*
* If the sub_qry is a setop, we cannot attach any qualifications
* to it, because the planner won't notice them. This could
* perhaps be relaxed someday, but for now, we may as well reject
* such a rule immediately.
*/
if (sub_qry->setOperations != NULL && *whereClause != NULL)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("conditional UNION/INTERSECT/EXCEPT statements are not implemented")));
/*
* Validate action's use of OLD/NEW, qual too
*/
has_old =
rangeTableEntry_used((Node *) sub_qry, PRS2_OLD_VARNO, 0) ||
rangeTableEntry_used(*whereClause, PRS2_OLD_VARNO, 0);
has_new =
rangeTableEntry_used((Node *) sub_qry, PRS2_NEW_VARNO, 0) ||
rangeTableEntry_used(*whereClause, PRS2_NEW_VARNO, 0);
switch (stmt->event)
{
case CMD_SELECT:
if (has_old)
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("ON SELECT rule cannot use OLD")));
if (has_new)
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("ON SELECT rule cannot use NEW")));
break;
case CMD_UPDATE:
/* both are OK */
break;
case CMD_INSERT:
if (has_old)
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("ON INSERT rule cannot use OLD")));
break;
case CMD_DELETE:
if (has_new)
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("ON DELETE rule cannot use NEW")));
break;
default:
elog(ERROR, "unrecognized event type: %d",
(int) stmt->event);
break;
}
/*
* OLD/NEW are not allowed in WITH queries, because they would
* amount to outer references for the WITH, which we disallow.
* However, they were already in the outer rangetable when we
* analyzed the query, so we have to check.
*
* Note that in the INSERT...SELECT case, we need to examine the
* CTE lists of both top_subqry and sub_qry.
*
* Note that we aren't digging into the body of the query looking
* for WITHs in nested sub-SELECTs. A WITH down there can
* legitimately refer to OLD/NEW, because it'd be an
* indirect-correlated outer reference.
*/
if (rangeTableEntry_used((Node *) top_subqry->cteList,
PRS2_OLD_VARNO, 0) ||
rangeTableEntry_used((Node *) sub_qry->cteList,
PRS2_OLD_VARNO, 0))
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("cannot refer to OLD within WITH query")));
if (rangeTableEntry_used((Node *) top_subqry->cteList,
PRS2_NEW_VARNO, 0) ||
rangeTableEntry_used((Node *) sub_qry->cteList,
PRS2_NEW_VARNO, 0))
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("cannot refer to NEW within WITH query")));
/*
* For efficiency's sake, add OLD to the rule action's jointree
* only if it was actually referenced in the statement or qual.
*
* For INSERT, NEW is not really a relation (only a reference to
* the to-be-inserted tuple) and should never be added to the
* jointree.
*
* For UPDATE, we treat NEW as being another kind of reference to
* OLD, because it represents references to *transformed* tuples
* of the existing relation. It would be wrong to enter NEW
* separately in the jointree, since that would cause a double
* join of the updated relation. It's also wrong to fail to make
* a jointree entry if only NEW and not OLD is mentioned.
*/
if (has_old || (has_new && stmt->event == CMD_UPDATE))
{
/*
* If sub_qry is a setop, manipulating its jointree will do no
* good at all, because the jointree is dummy. (This should be
* a can't-happen case because of prior tests.)
*/
if (sub_qry->setOperations != NULL)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("conditional UNION/INTERSECT/EXCEPT statements are not implemented")));
/* hack so we can use addRTEtoQuery() */
sub_pstate->p_rtable = sub_qry->rtable;
sub_pstate->p_joinlist = sub_qry->jointree->fromlist;
addRTEtoQuery(sub_pstate, oldrte, true, false, false);
sub_qry->jointree->fromlist = sub_pstate->p_joinlist;
}
newactions = lappend(newactions, top_subqry);
free_parsestate(sub_pstate);
}
*actions = newactions;
}
free_parsestate(pstate);
/* Close relation, but keep the exclusive lock */
table_close(rel, NoLock);
}
/*
* transformAlterTableStmt -
* parse analysis for ALTER TABLE
*
* Returns a List of utility commands to be done in sequence. One of these
* will be the transformed AlterTableStmt, but there may be additional actions
* to be done before and after the actual AlterTable() call.
*
* To avoid race conditions, it's important that this function rely only on
* the passed-in relid (and not on stmt->relation) to determine the target
* relation.
*/
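/*
 * For instance, ALTER TABLE tab ADD COLUMN id serial typically expands
 * into a CREATE SEQUENCE placed before the transformed AlterTableStmt
 * plus an ALTER SEQUENCE ... OWNED BY placed after it, all returned in
 * the one result list.
 */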
List *
transformAlterTableStmt(Oid relid, AlterTableStmt *stmt,
const char *queryString)
{
Relation rel;
TupleDesc tupdesc;
ParseState *pstate;
CreateStmtContext cxt;
List *result;
List *save_alist;
ListCell *lcmd,
*l;
List *newcmds = NIL;
bool skipValidation = true;
AlterTableCmd *newcmd;
RangeTblEntry *rte;
/*
* We must not scribble on the passed-in AlterTableStmt, so copy it. (This
* is overkill, but easy.)
*/
stmt = copyObject(stmt);
/* Caller is responsible for locking the relation */
rel = relation_open(relid, NoLock);
tupdesc = RelationGetDescr(rel);
/* Set up pstate */
pstate = make_parsestate(NULL);
pstate->p_sourcetext = queryString;
rte = addRangeTableEntryForRelation(pstate,
rel,
AccessShareLock,
NULL,
false,
true);
addRTEtoQuery(pstate, rte, false, true, true);
/* Set up CreateStmtContext */
cxt.pstate = pstate;
if (stmt->relkind == OBJECT_FOREIGN_TABLE)
{
cxt.stmtType = "ALTER FOREIGN TABLE";
cxt.isforeign = true;
}
else
{
cxt.stmtType = "ALTER TABLE";
cxt.isforeign = false;
}
cxt.relation = stmt->relation;
cxt.rel = rel;
cxt.inhRelations = NIL;
cxt.isalter = true;
cxt.columns = NIL;
cxt.ckconstraints = NIL;
cxt.fkconstraints = NIL;
cxt.ixconstraints = NIL;
cxt.inh_indexes = NIL;
cxt.extstats = NIL;
cxt.blist = NIL;
cxt.alist = NIL;
cxt.pkey = NULL;
cxt.ispartitioned = (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE);
cxt.partbound = NULL;
cxt.ofType = false;
/*
* The only subtypes that currently require parse transformation handling
* are ADD COLUMN, ADD CONSTRAINT and SET DATA TYPE. These largely re-use
* code from CREATE TABLE.
*/
foreach(lcmd, stmt->cmds)
{
AlterTableCmd *cmd = (AlterTableCmd *) lfirst(lcmd);
switch (cmd->subtype)
{
case AT_AddColumn:
case AT_AddColumnToView:
{
ColumnDef *def = castNode(ColumnDef, cmd->def);
transformColumnDefinition(&cxt, def);
/*
* If the column has a non-null default, we can't skip
* validation of foreign keys.
*/
if (def->raw_default != NULL)
skipValidation = false;
/*
* All constraints are processed in other ways. Remove the
* original list
*/
def->constraints = NIL;
newcmds = lappend(newcmds, cmd);
break;
}
case AT_AddConstraint:
/*
* The original AddConstraint cmd node doesn't go to newcmds
*/
if (IsA(cmd->def, Constraint))
{
transformTableConstraint(&cxt, (Constraint *) cmd->def);
if (((Constraint *) cmd->def)->contype == CONSTR_FOREIGN)
skipValidation = false;
}
else
elog(ERROR, "unrecognized node type: %d",
(int) nodeTag(cmd->def));
break;
case AT_ProcessedConstraint:
/*
* Already-transformed ADD CONSTRAINT, so just make it look
* like the standard case.
*/
cmd->subtype = AT_AddConstraint;
newcmds = lappend(newcmds, cmd);
break;
case AT_AlterColumnType:
{
ColumnDef *def = (ColumnDef *) cmd->def;
AttrNumber attnum;
/*
* For ALTER COLUMN TYPE, transform the USING clause if
* one was specified.
*/
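/*
 * E.g. ALTER TABLE tab ALTER COLUMN c TYPE integer USING c::integer;
 * the USING expression is delivered in def->raw_default.
 */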
if (def->raw_default)
{
def->cooked_default =
transformExpr(pstate, def->raw_default,
EXPR_KIND_ALTER_COL_TRANSFORM);
}
/*
* For identity column, create ALTER SEQUENCE command to
* change the data type of the sequence.
*/
attnum = get_attnum(relid, cmd->name);
/*
* if attribute not found, something will error about it
* later
*/
if (attnum != InvalidAttrNumber &&
TupleDescAttr(tupdesc, attnum - 1)->attidentity)
{
Oid seq_relid = getOwnedSequence(relid, attnum);
Oid typeOid = typenameTypeId(pstate, def->typeName);
AlterSeqStmt *altseqstmt = makeNode(AlterSeqStmt);
altseqstmt->sequence = makeRangeVar(get_namespace_name(get_rel_namespace(seq_relid)),
get_rel_name(seq_relid),
-1);
altseqstmt->options = list_make1(makeDefElem("as", (Node *) makeTypeNameFromOid(typeOid, -1), -1));
altseqstmt->for_identity = true;
cxt.blist = lappend(cxt.blist, altseqstmt);
}
newcmds = lappend(newcmds, cmd);
break;
}
case AT_AddIdentity:
{
Constraint *def = castNode(Constraint, cmd->def);
ColumnDef *newdef = makeNode(ColumnDef);
AttrNumber attnum;
newdef->colname = cmd->name;
newdef->identity = def->generated_when;
cmd->def = (Node *) newdef;
attnum = get_attnum(relid, cmd->name);
/*
* if attribute not found, something will error about it
* later
*/
if (attnum != InvalidAttrNumber)
generateSerialExtraStmts(&cxt, newdef,
get_atttype(relid, attnum),
def->options, true,
NULL, NULL);
newcmds = lappend(newcmds, cmd);
break;
}
case AT_SetIdentity:
{
/*
* Create an ALTER SEQUENCE statement for the internal
* sequence of the identity column.
*/
ListCell *lc;
List *newseqopts = NIL;
List *newdef = NIL;
List *seqlist;
AttrNumber attnum;
/*
* Split options into those handled by ALTER SEQUENCE and
* those for ALTER TABLE proper.
*/
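/*
 * E.g. for ALTER TABLE tab ALTER COLUMN c
 *     SET GENERATED ALWAYS SET INCREMENT BY 2
 * the "generated" option is kept for ALTER TABLE proper while
 * INCREMENT is routed into the ALTER SEQUENCE built below.
 */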
foreach(lc, castNode(List, cmd->def))
{
DefElem *def = lfirst_node(DefElem, lc);
if (strcmp(def->defname, "generated") == 0)
newdef = lappend(newdef, def);
else
newseqopts = lappend(newseqopts, def);
}
attnum = get_attnum(relid, cmd->name);
if (attnum)
{
seqlist = getOwnedSequences(relid, attnum);
if (seqlist)
{
AlterSeqStmt *seqstmt;
Oid seq_relid;
seqstmt = makeNode(AlterSeqStmt);
seq_relid = linitial_oid(seqlist);
seqstmt->sequence = makeRangeVar(get_namespace_name(get_rel_namespace(seq_relid)),
get_rel_name(seq_relid), -1);
seqstmt->options = newseqopts;
seqstmt->for_identity = true;
seqstmt->missing_ok = false;
cxt.alist = lappend(cxt.alist, seqstmt);
}
}
/*
* If column was not found or was not an identity column,
* we just let the ALTER TABLE command error out later.
*/
cmd->def = (Node *) newdef;
newcmds = lappend(newcmds, cmd);
break;
}
case AT_AttachPartition:
case AT_DetachPartition:
{
PartitionCmd *partcmd = (PartitionCmd *) cmd->def;
transformPartitionCmd(&cxt, partcmd);
/* assign transformed value of the partition bound */
partcmd->bound = cxt.partbound;
}
newcmds = lappend(newcmds, cmd);
break;
default:
newcmds = lappend(newcmds, cmd);
break;
}
}
/*
* transformIndexConstraints wants cxt.alist to contain only index
* statements, so transfer anything we already have into save_alist
* immediately.
*/
save_alist = cxt.alist;
cxt.alist = NIL;
/* Postprocess constraints */
transformIndexConstraints(&cxt);
transformFKConstraints(&cxt, skipValidation, true);
transformCheckConstraints(&cxt, false);
/*
* Push any index-creation commands into the ALTER, so that they can be
* scheduled nicely by tablecmds.c. Note that tablecmds.c assumes that
* the IndexStmt attached to an AT_AddIndex or AT_AddIndexConstraint
* subcommand has already been through transformIndexStmt.
*/
foreach(l, cxt.alist)
{
IndexStmt *idxstmt = lfirst_node(IndexStmt, l);
idxstmt = transformIndexStmt(relid, idxstmt, queryString);
newcmd = makeNode(AlterTableCmd);
newcmd->subtype = OidIsValid(idxstmt->indexOid) ? AT_AddIndexConstraint : AT_AddIndex;
newcmd->def = (Node *) idxstmt;
newcmds = lappend(newcmds, newcmd);
}
cxt.alist = NIL;
/* Append any CHECK or FK constraints to the commands list */
foreach(l, cxt.ckconstraints)
{
newcmd = makeNode(AlterTableCmd);
newcmd->subtype = AT_AddConstraint;
newcmd->def = (Node *) lfirst(l);
newcmds = lappend(newcmds, newcmd);
}
foreach(l, cxt.fkconstraints)
{
newcmd = makeNode(AlterTableCmd);
newcmd->subtype = AT_AddConstraint;
newcmd->def = (Node *) lfirst(l);
newcmds = lappend(newcmds, newcmd);
}
/* Append extended statistic objects */
transformExtendedStatistics(&cxt);
/* Close rel */
relation_close(rel, NoLock);
/*
* Output results.
*/
stmt->cmds = newcmds;
result = lappend(cxt.blist, stmt);
result = list_concat(result, cxt.alist);
result = list_concat(result, save_alist);
return result;
}
/*
* Preprocess a list of column constraint clauses
* to attach constraint attributes to their primary constraint nodes
* and detect inconsistent/misplaced constraint attributes.
*
* NOTE: currently, attributes are only supported for FOREIGN KEY, UNIQUE,
* EXCLUSION, and PRIMARY KEY constraints, but someday they ought to be
* supported for other constraint types.
*/
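/*
 * E.g. in a column declared
 *     bar integer UNIQUE DEFERRABLE INITIALLY DEFERRED
 * the DEFERRABLE and INITIALLY DEFERRED clauses arrive as separate
 * CONSTR_ATTR_* nodes that must be attached to the preceding UNIQUE
 * constraint node.
 */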
static void
transformConstraintAttrs(CreateStmtContext *cxt, List *constraintList)
{
Constraint *lastprimarycon = NULL;
bool saw_deferrability = false;
bool saw_initially = false;
ListCell *clist;
#define SUPPORTS_ATTRS(node) \
((node) != NULL && \
((node)->contype == CONSTR_PRIMARY || \
(node)->contype == CONSTR_UNIQUE || \
(node)->contype == CONSTR_EXCLUSION || \
(node)->contype == CONSTR_FOREIGN))
foreach(clist, constraintList)
{
Constraint *con = (Constraint *) lfirst(clist);
if (!IsA(con, Constraint))
elog(ERROR, "unrecognized node type: %d",
(int) nodeTag(con));
switch (con->contype)
{
case CONSTR_ATTR_DEFERRABLE:
if (!SUPPORTS_ATTRS(lastprimarycon))
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("misplaced DEFERRABLE clause"),
parser_errposition(cxt->pstate, con->location)));
if (saw_deferrability)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("multiple DEFERRABLE/NOT DEFERRABLE clauses not allowed"),
parser_errposition(cxt->pstate, con->location)));
saw_deferrability = true;
lastprimarycon->deferrable = true;
break;
case CONSTR_ATTR_NOT_DEFERRABLE:
if (!SUPPORTS_ATTRS(lastprimarycon))
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("misplaced NOT DEFERRABLE clause"),
parser_errposition(cxt->pstate, con->location)));
if (saw_deferrability)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("multiple DEFERRABLE/NOT DEFERRABLE clauses not allowed"),
parser_errposition(cxt->pstate, con->location)));
saw_deferrability = true;
lastprimarycon->deferrable = false;
if (saw_initially &&
lastprimarycon->initdeferred)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("constraint declared INITIALLY DEFERRED must be DEFERRABLE"),
parser_errposition(cxt->pstate, con->location)));
break;
case CONSTR_ATTR_DEFERRED:
if (!SUPPORTS_ATTRS(lastprimarycon))
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("misplaced INITIALLY DEFERRED clause"),
parser_errposition(cxt->pstate, con->location)));
if (saw_initially)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("multiple INITIALLY IMMEDIATE/DEFERRED clauses not allowed"),
parser_errposition(cxt->pstate, con->location)));
saw_initially = true;
lastprimarycon->initdeferred = true;
/*
* If only INITIALLY DEFERRED appears, assume DEFERRABLE
*/
if (!saw_deferrability)
lastprimarycon->deferrable = true;
else if (!lastprimarycon->deferrable)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("constraint declared INITIALLY DEFERRED must be DEFERRABLE"),
parser_errposition(cxt->pstate, con->location)));
break;
case CONSTR_ATTR_IMMEDIATE:
if (!SUPPORTS_ATTRS(lastprimarycon))
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("misplaced INITIALLY IMMEDIATE clause"),
parser_errposition(cxt->pstate, con->location)));
if (saw_initially)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("multiple INITIALLY IMMEDIATE/DEFERRED clauses not allowed"),
parser_errposition(cxt->pstate, con->location)));
saw_initially = true;
lastprimarycon->initdeferred = false;
break;
default:
/* Otherwise it's not an attribute */
lastprimarycon = con;
/* reset flags for new primary node */
saw_deferrability = false;
saw_initially = false;
break;
}
}
}
/*
* Special handling of type definition for a column
*/
static void
transformColumnType(CreateStmtContext *cxt, ColumnDef *column)
{
/*
* All we really need to do here is verify that the type is valid,
* including any collation spec that might be present.
*/
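/*
 * E.g. a column declared as integer COLLATE "C" is rejected here,
 * since integer is not a collatable type, while text COLLATE "C"
 * passes.
 */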
Type ctype = typenameType(cxt->pstate, column->typeName, NULL);
if (column->collClause)
{
Form_pg_type typtup = (Form_pg_type) GETSTRUCT(ctype);
LookupCollation(cxt->pstate,
column->collClause->collname,
column->collClause->location);
/* Complain if COLLATE is applied to an uncollatable type */
if (!OidIsValid(typtup->typcollation))
ereport(ERROR,
(errcode(ERRCODE_DATATYPE_MISMATCH),
errmsg("collations are not supported by type %s",
format_type_be(typtup->oid)),
parser_errposition(cxt->pstate,
column->collClause->location)));
}
ReleaseSysCache(ctype);
}
/*
* transformCreateSchemaStmt -
* analyzes the CREATE SCHEMA statement
*
* Split the schema element list into individual commands and place
* them in the result list in an order such that there are no forward
* references (e.g. GRANT to a table created later in the list). Note
* that the logic we use for determining forward references is
* presently quite incomplete.
*
* SQL also allows constraints to make forward references, so thumb through
* the table columns and move forward references to a posterior alter-table
* command.
*
* The result is a list of parse nodes that still need to be analyzed ---
* but we can't analyze the later commands until we've executed the earlier
* ones, because of possible inter-object references.
*
* Note: this breaks the rules a little bit by modifying schema-name fields
* within passed-in structs. However, the transformation would be the same
* if done over, so it should be all right to scribble on the input to this
* extent.
*/
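/*
 * E.g. for
 *     CREATE SCHEMA hollywood
 *         CREATE VIEW winners AS SELECT title FROM films WHERE award IS NOT NULL
 *         CREATE TABLE films (title text, award text);
 * the CREATE TABLE is placed ahead of the CREATE VIEW in the result,
 * so the view's reference to films is no longer a forward reference.
 */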
List *
transformCreateSchemaStmt(CreateSchemaStmt *stmt)
{
CreateSchemaStmtContext cxt;
List *result;
ListCell *elements;
cxt.stmtType = "CREATE SCHEMA";
cxt.schemaname = stmt->schemaname;
cxt.authrole = (RoleSpec *) stmt->authrole;
cxt.sequences = NIL;
cxt.tables = NIL;
cxt.views = NIL;
cxt.indexes = NIL;
cxt.triggers = NIL;
cxt.grants = NIL;
/*
* Run through each schema element in the schema element list. Separate
* statements by type, and do preliminary analysis.
*/
foreach(elements, stmt->schemaElts)
{
Node *element = lfirst(elements);
switch (nodeTag(element))
{
case T_CreateSeqStmt:
{
CreateSeqStmt *elp = (CreateSeqStmt *) element;
setSchemaName(cxt.schemaname, &elp->sequence->schemaname);
cxt.sequences = lappend(cxt.sequences, element);
}
break;
case T_CreateStmt:
{
CreateStmt *elp = (CreateStmt *) element;
setSchemaName(cxt.schemaname, &elp->relation->schemaname);
/*
* XXX todo: deal with constraints
*/
cxt.tables = lappend(cxt.tables, element);
}
break;
case T_ViewStmt:
{
ViewStmt *elp = (ViewStmt *) element;
setSchemaName(cxt.schemaname, &elp->view->schemaname);
/*
* XXX todo: deal with references between views
*/
cxt.views = lappend(cxt.views, element);
}
break;
case T_IndexStmt:
{
IndexStmt *elp = (IndexStmt *) element;
setSchemaName(cxt.schemaname, &elp->relation->schemaname);
cxt.indexes = lappend(cxt.indexes, element);
}
break;
case T_CreateTrigStmt:
{
CreateTrigStmt *elp = (CreateTrigStmt *) element;
setSchemaName(cxt.schemaname, &elp->relation->schemaname);
cxt.triggers = lappend(cxt.triggers, element);
}
break;
case T_GrantStmt:
cxt.grants = lappend(cxt.grants, element);
break;
default:
elog(ERROR, "unrecognized node type: %d",
(int) nodeTag(element));
}
}
result = NIL;
result = list_concat(result, cxt.sequences);
result = list_concat(result, cxt.tables);
result = list_concat(result, cxt.views);
result = list_concat(result, cxt.indexes);
result = list_concat(result, cxt.triggers);
result = list_concat(result, cxt.grants);
return result;
}
/*
* setSchemaName
* Set or check schema name in an element of a CREATE SCHEMA command
*/
static void
setSchemaName(char *context_schema, char **stmt_schema_name)
{
if (*stmt_schema_name == NULL)
*stmt_schema_name = context_schema;
else if (strcmp(context_schema, *stmt_schema_name) != 0)
ereport(ERROR,
(errcode(ERRCODE_INVALID_SCHEMA_DEFINITION),
errmsg("CREATE specifies a schema (%s) "
"different from the one being created (%s)",
*stmt_schema_name, context_schema)));
}
/*
* transformPartitionCmd
* Analyze the ATTACH/DETACH PARTITION command
*
* In case of the ATTACH PARTITION command, cxt->partbound is set to the
* transformed value of cmd->bound.
*/
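/*
 * E.g. for ALTER TABLE meas ATTACH PARTITION meas_y2019
 *     FOR VALUES FROM ('2019-01-01') TO ('2020-01-01')
 * (names illustrative), cmd->bound carries the FOR VALUES clause and is
 * transformed against the parent's partition key.
 */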
static void
transformPartitionCmd(CreateStmtContext *cxt, PartitionCmd *cmd)
{
Relation parentRel = cxt->rel;
switch (parentRel->rd_rel->relkind)
{
case RELKIND_PARTITIONED_TABLE:
/* transform the partition bound, if any */
Assert(RelationGetPartitionKey(parentRel) != NULL);
if (cmd->bound != NULL)
cxt->partbound = transformPartitionBound(cxt->pstate, parentRel,
cmd->bound);
break;
case RELKIND_PARTITIONED_INDEX:
/* nothing to check */
Assert(cmd->bound == NULL);
break;
case RELKIND_RELATION:
/* the table must be partitioned */
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("table \"%s\" is not partitioned",
RelationGetRelationName(parentRel))));
break;
case RELKIND_INDEX:
/* the index must be partitioned */
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("index \"%s\" is not partitioned",
RelationGetRelationName(parentRel))));
break;
default:
/* parser shouldn't let this case through */
elog(ERROR, "\"%s\" is not a partitioned table or index",
RelationGetRelationName(parentRel));
break;
}
}
/*
* transformPartitionBound
*
* Transform a partition bound specification
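 *
 * The bound is the FOR VALUES clause of CREATE TABLE ... PARTITION OF or
 * ALTER TABLE ... ATTACH PARTITION, for example:
 *     FOR VALUES WITH (MODULUS 4, REMAINDER 0)   -- hash
 *     FOR VALUES IN ('a', 'b')                   -- list
 *     FOR VALUES FROM (1) TO (10)                -- range
 * A DEFAULT partition carries no explicit bound values; see below.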
*/
PartitionBoundSpec *
transformPartitionBound(ParseState *pstate, Relation parent,
PartitionBoundSpec *spec)
{
PartitionBoundSpec *result_spec;
PartitionKey key = RelationGetPartitionKey(parent);
char strategy = get_partition_strategy(key);
int partnatts = get_partition_natts(key);
List *partexprs = get_partition_exprs(key);
/* Avoid scribbling on input */
result_spec = copyObject(spec);
if (spec->is_default)
{
if (strategy == PARTITION_STRATEGY_HASH)
ereport(ERROR,
(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
errmsg("a hash-partitioned table may not have a default partition")));
/*
* In the case of the default partition, the parser had no way to identify
* the partition strategy.  Assign the parent's strategy to the default
* partition bound spec.
*/
result_spec->strategy = strategy;
return result_spec;
}
if (strategy == PARTITION_STRATEGY_HASH)
{
if (spec->strategy != PARTITION_STRATEGY_HASH)
ereport(ERROR,
(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
errmsg("invalid bound specification for a hash partition"),
parser_errposition(pstate, exprLocation((Node *) spec))));
if (spec->modulus <= 0)
ereport(ERROR,
(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
errmsg("modulus for hash partition must be a positive integer")));
Assert(spec->remainder >= 0);
if (spec->remainder >= spec->modulus)
ereport(ERROR,
(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
errmsg("remainder for hash partition must be less than modulus")));
}
else if (strategy == PARTITION_STRATEGY_LIST)
{
ListCell *cell;
char *colname;
Oid coltype;
int32 coltypmod;
Oid partcollation;
if (spec->strategy != PARTITION_STRATEGY_LIST)
ereport(ERROR,
(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
errmsg("invalid bound specification for a list partition"),
parser_errposition(pstate, exprLocation((Node *) spec))));
/* Get the only column's name in case we need to output an error */
if (key->partattrs[0] != 0)
colname = get_attname(RelationGetRelid(parent),
key->partattrs[0], false);
else
colname = deparse_expression((Node *) linitial(partexprs),
deparse_context_for(RelationGetRelationName(parent),
RelationGetRelid(parent)),
false, false);
/* Need its type data too */
coltype = get_partition_col_typid(key, 0);
coltypmod = get_partition_col_typmod(key, 0);
partcollation = get_partition_col_collation(key, 0);
result_spec->listdatums = NIL;
foreach(cell, spec->listdatums)
{
Node *expr = lfirst(cell);
Const *value;
ListCell *cell2;
bool duplicate;
value = transformPartitionBoundValue(pstate, expr,
colname, coltype, coltypmod,
partcollation);
/* Don't add to the result if the value is a duplicate */
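/* (for example, FOR VALUES IN (1, 1) stores the datum 1 only once) */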
duplicate = false;
foreach(cell2, result_spec->listdatums)
{
Const *value2 = castNode(Const, lfirst(cell2));
if (equal(value, value2))
{
duplicate = true;
break;
}
}
if (duplicate)
continue;
result_spec->listdatums = lappend(result_spec->listdatums,
value);
}
}
else if (strategy == PARTITION_STRATEGY_RANGE)
{
if (spec->strategy != PARTITION_STRATEGY_RANGE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
errmsg("invalid bound specification for a range partition"),
parser_errposition(pstate, exprLocation((Node *) spec))));
if (list_length(spec->lowerdatums) != partnatts)
ereport(ERROR,
(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
errmsg("FROM must specify exactly one value per partitioning column")));
if (list_length(spec->upperdatums) != partnatts)
ereport(ERROR,
(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
errmsg("TO must specify exactly one value per partitioning column")));
/*
* Convert raw parse nodes into PartitionRangeDatum nodes and perform
* any necessary validation.
*/
result_spec->lowerdatums =
transformPartitionRangeBounds(pstate, spec->lowerdatums,
parent);
result_spec->upperdatums =
transformPartitionRangeBounds(pstate, spec->upperdatums,
parent);
}
else
elog(ERROR, "unexpected partition strategy: %d", (int) strategy);
return result_spec;
}
/*
* transformPartitionRangeBounds
* This converts the expressions for range partition bounds from the raw
* grammar representation to PartitionRangeDatum structs
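 *
 * The keywords MINVALUE and MAXVALUE arrive here as ColumnRefs (for
 * example, FOR VALUES FROM (MINVALUE) TO (10)) and become
 * PartitionRangeDatum nodes of the corresponding kind; every other
 * expression is coerced to the type of its partition key column.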
*/
static List *
transformPartitionRangeBounds(ParseState *pstate, List *blist,
Relation parent)
{
List *result = NIL;
PartitionKey key = RelationGetPartitionKey(parent);
List *partexprs = get_partition_exprs(key);
ListCell *lc;
int i,
j;
i = j = 0;
foreach(lc, blist)
{
Node *expr = lfirst(lc);
PartitionRangeDatum *prd = NULL;
/*
* Infinite range bounds -- "minvalue" and "maxvalue" -- get passed
* in as ColumnRefs.
*/
if (IsA(expr, ColumnRef))
{
ColumnRef *cref = (ColumnRef *) expr;
char *cname = NULL;
/*
* There should be a single field named either "minvalue" or
* "maxvalue".
*/
if (list_length(cref->fields) == 1 &&
IsA(linitial(cref->fields), String))
cname = strVal(linitial(cref->fields));
if (cname == NULL)
{
/*
* ColumnRef is not in the desired single-field-name form.
* For consistency between all partition strategies, let the
* expression transformation report any errors rather than
* doing it ourselves.
*/
}
else if (strcmp("minvalue", cname) == 0)
{
prd = makeNode(PartitionRangeDatum);
prd->kind = PARTITION_RANGE_DATUM_MINVALUE;
prd->value = NULL;
}
else if (strcmp("maxvalue", cname) == 0)
{
prd = makeNode(PartitionRangeDatum);
prd->kind = PARTITION_RANGE_DATUM_MAXVALUE;
prd->value = NULL;
}
}
if (prd == NULL)
{
char *colname;
Oid coltype;
int32 coltypmod;
Oid partcollation;
Const *value;
/* Get the column's name in case we need to output an error */
if (key->partattrs[i] != 0)
colname = get_attname(RelationGetRelid(parent),
key->partattrs[i], false);
else
{
colname = deparse_expression((Node *) list_nth(partexprs, j),
deparse_context_for(RelationGetRelationName(parent),
RelationGetRelid(parent)),
false, false);
++j;
}
/* Need its type data too */
coltype = get_partition_col_typid(key, i);
coltypmod = get_partition_col_typmod(key, i);
partcollation = get_partition_col_collation(key, i);
value = transformPartitionBoundValue(pstate, expr,
colname,
coltype, coltypmod,
partcollation);
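/*
 * NULL is not a valid range bound value; an unbounded end of a range must
 * be written as MINVALUE or MAXVALUE instead.
 */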
if (value->constisnull)
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("cannot specify NULL in range bound")));
prd = makeNode(PartitionRangeDatum);
prd->kind = PARTITION_RANGE_DATUM_VALUE;
prd->value = (Node *) value;
++i;
}
prd->location = exprLocation(expr);
result = lappend(result, prd);
}
/*
* Once we see MINVALUE or MAXVALUE for one column, the remaining
* columns must be the same.
*/
validateInfiniteBounds(pstate, result);
return result;
}
/*
* validateInfiniteBounds
*
* Check that a MAXVALUE or MINVALUE specification in a partition bound is
* followed only by more of the same.
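*
* For example, FROM (1, MINVALUE) is accepted because MINVALUE comes last,
* while FROM (MINVALUE, 1) is rejected because a finite value follows
* MINVALUE.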
*/
static void
validateInfiniteBounds(ParseState *pstate, List *blist)
{
ListCell *lc;
PartitionRangeDatumKind kind = PARTITION_RANGE_DATUM_VALUE;
foreach(lc, blist)
{
PartitionRangeDatum *prd = castNode(PartitionRangeDatum, lfirst(lc));
if (kind == prd->kind)
continue;
switch (kind)
{
case PARTITION_RANGE_DATUM_VALUE:
kind = prd->kind;
break;
case PARTITION_RANGE_DATUM_MAXVALUE:
ereport(ERROR,
(errcode(ERRCODE_DATATYPE_MISMATCH),
errmsg("every bound following MAXVALUE must also be MAXVALUE"),
parser_errposition(pstate, exprLocation((Node *) prd))));
break;
case PARTITION_RANGE_DATUM_MINVALUE:
ereport(ERROR,
(errcode(ERRCODE_DATATYPE_MISMATCH),
errmsg("every bound following MINVALUE must also be MINVALUE"),
parser_errposition(pstate, exprLocation((Node *) prd))));
break;
}
}
}
/*
* Transform one constant in a partition bound spec
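*
* The result is a Const whose type, typmod and collation match the
* partition key column; an error is raised if the value cannot be coerced
* to that type or if it contains column references.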
*/
static Const *
transformPartitionBoundValue(ParseState *pstate, Node *val,
const char *colName, Oid colType, int32 colTypmod,
Oid partCollation)
{
Node *value;
/* Transform raw parsetree */
value = transformExpr(pstate, val, EXPR_KIND_PARTITION_BOUND);
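/*
 * EXPR_KIND_PARTITION_BOUND restricts what may appear here; for instance,
 * aggregates, window functions and set-returning functions are rejected
 * during expression transformation.
 */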
/*
* Check that the input expression's collation is compatible with the one
* specified for the parent's partition key (partcollation).  Don't throw
* an error if it's the default collation, which we'll replace with the
* parent's collation anyway.
*/
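/*
 * For example, FOR VALUES IN ('a' COLLATE "C") is rejected here if the
 * partition key column's collation is, say, "en_US".
 */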
if (IsA(value, CollateExpr))
{
Oid exprCollOid = exprCollation(value);
if (OidIsValid(exprCollOid) &&
exprCollOid != DEFAULT_COLLATION_OID &&
exprCollOid != partCollation)
ereport(ERROR,
(errcode(ERRCODE_DATATYPE_MISMATCH),
errmsg("collation of partition bound value for column \"%s\" does not match partition key collation \"%s\"",
colName, get_collation_name(partCollation)),
parser_errposition(pstate, exprLocation(value))));
}
/* Coerce to correct type */
value = coerce_to_target_type(pstate,
value, exprType(value),
colType,
colTypmod,
COERCION_ASSIGNMENT,
COERCE_IMPLICIT_CAST,
-1);
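/* coerce_to_target_type() returns NULL if the coercion is not possible */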
if (value == NULL)
ereport(ERROR,
(errcode(ERRCODE_DATATYPE_MISMATCH),
errmsg("specified value cannot be cast to type %s for column \"%s\"",
format_type_be(colType), colName),
parser_errposition(pstate, exprLocation(val))));
/* Simplify the expression, in case we had a coercion */
if (!IsA(value, Const))
value = (Node *) expression_planner((Expr *) value);
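/*
 * (The coercion above can leave a non-Const behind, e.g. a FuncExpr
 * calling a cast function that is only stable, such as int4 to money;
 * const-simplification folds the immutable cases, and anything that is
 * still not a Const is evaluated by evaluate_expr() below.)
 */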
/* Make sure the expression does not refer to any vars. */
if (contain_var_clause(value))
ereport(ERROR,
(errcode(ERRCODE_INVALID_COLUMN_REFERENCE),
errmsg("cannot use column references in partition bound expression"),
parser_errposition(pstate, exprLocation(value))));
/*
* Evaluate the expression, assigning the partition key's collation to the
* resulting Const expression.
*/
value = (Node *) evaluate_expr((Expr *) value, colType, colTypmod,
partCollation);
if (!IsA(value, Const))
elog(ERROR, "could not evaluate partition bound expression");
return (Const *) value;
}