/*-------------------------------------------------------------------------
 *
 * typecmds.c
 *	  Routines for SQL commands that manipulate types (and domains).
 *
 * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 *
 * IDENTIFICATION
 *	  src/backend/commands/typecmds.c
 *
 * DESCRIPTION
 *	  The "DefineFoo" routines take the parse tree and pick out the
 *	  appropriate arguments/flags, passing the results to the
 *	  corresponding "FooDefine" routines (in src/catalog) that do
 *	  the actual catalog-munging.  These routines also verify permission
 *	  of the user to execute the command.
 *
 * NOTES
 *	  These things must be defined and committed in the following order:
 *		"create function":
 *			input/output, recv/send functions
 *		"create type":
 *			type
 *		"create operator":
 *			operators
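 *
 *	  An illustrative SQL sketch of that ordering (the names "complex",
 *	  complex_in, and complex_out are hypothetical, not objects defined
 *	  anywhere in this file; the shell type created first is what lets
 *	  the I/O functions reference the not-yet-complete type):
 *
 *		CREATE TYPE complex;			-- shell type
 *		CREATE FUNCTION complex_in(cstring) RETURNS complex ...;
 *		CREATE FUNCTION complex_out(complex) RETURNS cstring ...;
 *		CREATE TYPE complex (INPUT = complex_in, OUTPUT = complex_out);
 *		CREATE OPERATOR + (LEFTARG = complex, RIGHTARG = complex, ...);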
 *
 *
 *-------------------------------------------------------------------------
 */
#include "postgres.h"

#include "access/genam.h"
#include "access/heapam.h"
#include "access/htup_details.h"
#include "access/tableam.h"
#include "access/xact.h"
#include "catalog/binary_upgrade.h"
#include "catalog/catalog.h"
#include "catalog/heap.h"
#include "catalog/objectaccess.h"
#include "catalog/pg_am.h"
#include "catalog/pg_authid.h"
#include "catalog/pg_cast.h"
#include "catalog/pg_collation.h"
#include "catalog/pg_constraint.h"
#include "catalog/pg_depend.h"
#include "catalog/pg_enum.h"
#include "catalog/pg_language.h"
#include "catalog/pg_namespace.h"
#include "catalog/pg_proc.h"
#include "catalog/pg_range.h"
#include "catalog/pg_type.h"
#include "commands/defrem.h"
#include "commands/tablecmds.h"
#include "commands/typecmds.h"
#include "executor/executor.h"
#include "miscadmin.h"
#include "nodes/makefuncs.h"
#include "optimizer/optimizer.h"
#include "parser/parse_coerce.h"
#include "parser/parse_collate.h"
#include "parser/parse_expr.h"
#include "parser/parse_func.h"
#include "parser/parse_type.h"
#include "utils/builtins.h"
#include "utils/fmgroids.h"
#include "utils/inval.h"
#include "utils/lsyscache.h"
#include "utils/memutils.h"
#include "utils/rel.h"
#include "utils/ruleutils.h"
#include "utils/snapmgr.h"
#include "utils/syscache.h"


/* result structure for get_rels_with_domain() */
typedef struct
{
	Relation	rel;			/* opened and locked relation */
	int			natts;			/* number of attributes of interest */
	int		   *atts;			/* attribute numbers */
	/* atts[] is of allocated length RelationGetNumberOfAttributes(rel) */
} RelToCheck;
|
2003-01-04 01:46:08 +01:00
|
|
|
|
2020-03-06 18:19:29 +01:00
|
|
|
/* parameter structure for AlterTypeRecurse() */
|
|
|
|
typedef struct
|
|
|
|
{
|
|
|
|
/* Flags indicating which type attributes to update */
|
|
|
|
bool updateStorage;
|
|
|
|
bool updateReceive;
|
|
|
|
bool updateSend;
|
|
|
|
bool updateTypmodin;
|
|
|
|
bool updateTypmodout;
|
|
|
|
bool updateAnalyze;
|
2020-12-12 00:07:02 +01:00
|
|
|
bool updateSubscript;
|
2020-03-06 18:19:29 +01:00
|
|
|
/* New values for relevant attributes */
|
|
|
|
char storage;
|
|
|
|
Oid receiveOid;
|
|
|
|
Oid sendOid;
|
|
|
|
Oid typmodinOid;
|
|
|
|
Oid typmodoutOid;
|
|
|
|
Oid analyzeOid;
|
2020-12-12 00:07:02 +01:00
|
|
|
Oid subscriptOid;
|
2020-03-06 18:19:29 +01:00
|
|
|
} AlterTypeRecurseParams;

/* Potentially set by pg_upgrade_support functions */
Oid			binary_upgrade_next_array_pg_type_oid = InvalidOid;
Oid			binary_upgrade_next_mrng_pg_type_oid = InvalidOid;
Oid			binary_upgrade_next_mrng_array_pg_type_oid = InvalidOid;

static void makeRangeConstructors(const char *name, Oid namespace,
								  Oid rangeOid, Oid subtype);
static void makeMultirangeConstructors(const char *name, Oid namespace,
									   Oid multirangeOid, Oid rangeOid,
									   Oid rangeArrayOid,
									   Oid *oneArgContructorOid);
static void makeMultirangeCasts(const char *name, Oid namespace,
								Oid multirangeOid, Oid rangeOid,
								Oid rangeArrayOid, Oid singleArgContructorOid);
static Oid	findTypeInputFunction(List *procname, Oid typeOid);
static Oid	findTypeOutputFunction(List *procname, Oid typeOid);
static Oid	findTypeReceiveFunction(List *procname, Oid typeOid);
static Oid	findTypeSendFunction(List *procname, Oid typeOid);
static Oid	findTypeTypmodinFunction(List *procname);
static Oid	findTypeTypmodoutFunction(List *procname);
static Oid	findTypeAnalyzeFunction(List *procname, Oid typeOid);
static Oid	findTypeSubscriptingFunction(List *procname, Oid typeOid);
static Oid	findRangeSubOpclass(List *opcname, Oid subtype);
static Oid	findRangeCanonicalFunction(List *procname, Oid typeOid);
static Oid	findRangeSubtypeDiffFunction(List *procname, Oid subtype);
static void validateDomainConstraint(Oid domainoid, char *ccbin);
static List *get_rels_with_domain(Oid domainOid, LOCKMODE lockmode);
static void checkEnumOwner(HeapTuple tup);
static char *domainAddConstraint(Oid domainOid, Oid domainNamespace,
								 Oid baseTypeOid,
								 int typMod, Constraint *constr,
								 const char *domainName,
								 ObjectAddress *constrAddr);
static Node *replace_domain_constraint_value(ParseState *pstate,
											 ColumnRef *cref);
static void AlterTypeRecurse(Oid typeOid, bool isImplicitArray,
							 HeapTuple tup, Relation catalog,
							 AlterTypeRecurseParams *atparams);


/*
 * DefineType
 *		Registers a new base type.
 */
ObjectAddress
DefineType(ParseState *pstate, List *names, List *parameters)
{
	char	   *typeName;
	Oid			typeNamespace;
	int16		internalLength = -1;	/* default: variable-length */
	List	   *inputName = NIL;
	List	   *outputName = NIL;
	List	   *receiveName = NIL;
	List	   *sendName = NIL;
	List	   *typmodinName = NIL;
	List	   *typmodoutName = NIL;
	List	   *analyzeName = NIL;
	List	   *subscriptName = NIL;
	char		category = TYPCATEGORY_USER;
	bool		preferred = false;
	char		delimiter = DEFAULT_TYPDELIM;
	Oid			elemType = InvalidOid;
	char	   *defaultValue = NULL;
	bool		byValue = false;
	char		alignment = TYPALIGN_INT;	/* default alignment */
	char		storage = TYPSTORAGE_PLAIN; /* default TOAST storage method */
	Oid			collation = InvalidOid;
	DefElem    *likeTypeEl = NULL;
	DefElem    *internalLengthEl = NULL;
	DefElem    *inputNameEl = NULL;
	DefElem    *outputNameEl = NULL;
	DefElem    *receiveNameEl = NULL;
	DefElem    *sendNameEl = NULL;
	DefElem    *typmodinNameEl = NULL;
	DefElem    *typmodoutNameEl = NULL;
	DefElem    *analyzeNameEl = NULL;
2020-12-09 18:40:37 +01:00
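The "less squishy" array test the message describes can be sketched as a one-line predicate over the catalog row. This is a minimal illustration, not backend code: `TypeRow` is a hypothetical cut-down stand-in for `Form_pg_type`, and the OID value behind `F_ARRAY_SUBSCRIPT_HANDLER` here is illustrative only.

```c
#include <assert.h>
#include <stdbool.h>

typedef unsigned int Oid;

#define F_ARRAY_SUBSCRIPT_HANDLER ((Oid) 2742)	/* illustrative OID only */

typedef struct TypeRow
{
	Oid			typsubscript;	/* subscripting handler function, or 0 */
} TypeRow;

/*
 * A type is a "true" varlena array exactly when its subscript handler
 * is array_subscript_handler() -- no typlen heuristics needed.
 */
static bool
type_is_true_array(const TypeRow *typ)
{
	return typ->typsubscript == F_ARRAY_SUBSCRIPT_HANDLER;
}
```

Forbidding user-defined types from naming that handler directly is what makes this equality test bulletproof.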
	DefElem    *subscriptNameEl = NULL;
	DefElem    *categoryEl = NULL;
	DefElem    *preferredEl = NULL;
	DefElem    *delimiterEl = NULL;
	DefElem    *elemTypeEl = NULL;
	DefElem    *defaultValueEl = NULL;
	DefElem    *byValueEl = NULL;
	DefElem    *alignmentEl = NULL;
	DefElem    *storageEl = NULL;
	DefElem    *collatableEl = NULL;
	Oid			inputOid;
	Oid			outputOid;
	Oid			receiveOid = InvalidOid;
	Oid			sendOid = InvalidOid;
	Oid			typmodinOid = InvalidOid;
	Oid			typmodoutOid = InvalidOid;
	Oid			analyzeOid = InvalidOid;
	Oid			subscriptOid = InvalidOid;
Support arrays of composite types, including the rowtypes of regular tables
and views (but not system catalogs, nor sequences or toast tables). Get rid
of the hardwired convention that a type's array type is named exactly "_type",
instead using a new column pg_type.typarray to provide the linkage. (It still
will be named "_type", though, except in odd corner cases such as
maximum-length type names.)
Along the way, make tracking of owner and schema dependencies for types more
uniform: a type directly created by the user has these dependencies, while a
table rowtype or auto-generated array type does not have them, but depends on
its parent object instead.
David Fetter, Andrew Dunstan, Tom Lane
2007-05-11 19:57:14 +02:00
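The naming convention the message mentions (array type named "_" + element type name, truncated in corner cases) can be sketched as below. This is a simplified illustration under stated assumptions: `make_array_type_name` is an invented helper, and the real backend routine truncates more carefully, relying on `pg_type.typarray` rather than the name for the actual linkage.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define NAMEDATALEN 64			/* PostgreSQL's identifier length limit */

/*
 * Form the conventional auto-generated array type name by prepending
 * '_', truncating to fit NAMEDATALEN-1 bytes for maximum-length names.
 */
static void
make_array_type_name(const char *base, char out[NAMEDATALEN])
{
	snprintf(out, NAMEDATALEN, "_%s", base);
}
```

Since the catalog column, not the name, carries the linkage, a truncated or otherwise odd array type name is merely cosmetic.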
	char	   *array_type;
	Oid			array_oid;
	Oid			typoid;
	ListCell   *pl;
Change many routines to return ObjectAddress rather than OID
The changed routines are mostly those that can be directly called by
ProcessUtilitySlow; the intention is to make the affected object
information more precise, in support of future event trigger changes.
Originally it was envisioned that the OID of the affected object would
be enough, and in most cases that is correct, but upon actually
implementing the event trigger changes it turned out that ObjectAddress
is more widely useful.
Additionally, some command execution routines grew an output argument
that's an object address which provides further info about the executed
command. To wit:
* for ALTER DOMAIN / ADD CONSTRAINT, it corresponds to the address of
the new constraint
* for ALTER OBJECT / SET SCHEMA, it corresponds to the address of the
schema that originally contained the object.
* for ALTER EXTENSION {ADD, DROP} OBJECT, it corresponds to the address
of the object added to or dropped from the extension.
There's no user-visible change in this commit, and no functional change
either.
Discussion: 20150218213255.GC6717@tamriel.snowman.net
Reviewed-By: Stephen Frost, Andres Freund
2015-03-03 18:10:50 +01:00
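The ObjectAddress these routines now return identifies an object by catalog, OID, and sub-id. The sketch below is a minimal mirror of that struct for illustration; `make_object_address` is an invented helper (the backend uses the `ObjectAddressSet` macro), and the OIDs in the usage are illustrative.

```c
#include <assert.h>

typedef unsigned int Oid;

/*
 * Minimal mirror of PostgreSQL's ObjectAddress: the catalog containing
 * the object, the object's OID, and a sub-id (e.g. a column number)
 * that is 0 when the whole object is meant.
 */
typedef struct ObjectAddress
{
	Oid			classId;
	Oid			objectId;
	int			objectSubId;
} ObjectAddress;

static ObjectAddress
make_object_address(Oid classId, Oid objectId, int subId)
{
	ObjectAddress addr = {classId, objectId, subId};

	return addr;
}
```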
	ObjectAddress address;

	/*
	 * As of Postgres 8.4, we require superuser privilege to create a base
	 * type.  This is simple paranoia: there are too many ways to mess up the
	 * system with an incorrect type definition (for instance, representation
	 * parameters that don't match what the C code expects).  In practice it
	 * takes superuser privilege to create the I/O functions, and so the
	 * former requirement that you own the I/O functions pretty much forced
	 * superuserness anyway.  We're just making doubly sure here.
	 *
	 * XXX re-enable NOT_USED code sections below if you remove this test.
	 */
	if (!superuser())
		ereport(ERROR,
				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
				 errmsg("must be superuser to create a base type")));

	/* Convert list of names to a name and namespace */
	typeNamespace = QualifiedNameGetCreationNamespace(names, &typeName);

#ifdef NOT_USED
	/* XXX this is unnecessary given the superuser check above */
	/* Check we have creation rights in target namespace */
	aclresult = pg_namespace_aclcheck(typeNamespace, GetUserId(), ACL_CREATE);
	if (aclresult != ACLCHECK_OK)
		aclcheck_error(aclresult, OBJECT_SCHEMA,
					   get_namespace_name(typeNamespace));
#endif

	/*
	 * Look to see if type already exists.
	 */
Remove WITH OIDS support, change oid catalog column visibility.
Previously tables declared WITH OIDS, including a significant fraction
of the catalog tables, stored the oid column not as a normal column,
but as part of the tuple header.
This special column was not shown by default, which was somewhat odd,
as it's often (consider e.g. pg_class.oid) one of the more important
parts of a row. Neither pg_dump nor COPY included the contents of the
oid column by default.
The fact that the oid column was not an ordinary column necessitated a
significant amount of special case code to support oid columns. That
already was painful for the existing code; the upcoming work aiming to make
table storage pluggable would have required expanding and duplicating
that "specialness" significantly.
WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0).
Remove it.
Removing includes:
- CREATE TABLE and ALTER TABLE syntax for declaring the table to be
WITH OIDS has been removed (WITH (oids[ = true]) will error out)
- pg_dump does not support dumping tables declared WITH OIDS and will
issue a warning when dumping one (and ignore the oid column).
- restoring a pg_dump archive with pg_restore will warn when
restoring a table with oid contents (and ignore the oid column)
- COPY will refuse to load a binary dump that includes oids.
- pg_upgrade will error out when encountering tables declared WITH
OIDS, they have to be altered to remove the oid column first.
- Functionality to access the oid of the last inserted row (like
plpgsql's RESULT_OID, spi's SPI_lastoid, ...) has been removed.
The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false)
for CREATE TABLE) is still supported. While that requires a bit of
support code, it seems unnecessary to break applications / dumps that
do not use oids, and are explicit about not using them.
The biggest user of WITH OID columns was postgres' catalog. This
commit changes all 'magic' oid columns to be columns that are normally
declared and stored. To reduce unnecessary query breakage all the
newly added columns are still named 'oid', even if a table's column
naming scheme would indicate 'reloid' or such. This obviously
requires adapting a lot code, mostly replacing oid access via
HeapTupleGetOid() with access to the underlying Form_pg_*->oid column.
The bootstrap process now assigns oids for all oid columns in
genbki.pl that do not have an explicit value (starting at the largest
oid previously used); only oids assigned later will be above
FirstBootstrapObjectId. As the oid column now is a normal column the
special bootstrap syntax for oids has been removed.
Oids are not automatically assigned during insertion anymore, all
backend code explicitly assigns oids with GetNewOidWithIndex(). For
the rare case that insertions into the catalog via SQL are called for
the new pg_nextoid() function can be used (which only works on catalog
tables).
The fact that oid columns on system tables are now normal columns
means that they will be included in the set of columns expanded
by * (i.e. SELECT * FROM pg_class will now include the table's oid,
previously it did not). It'd not technically be hard to hide oid
column by default, but that'd mean confusing behavior would either
have to be carried forward forever, or it'd cause breakage down the
line.
While it's not unlikely that further adjustments are needed, the
scope/invasiveness of the patch makes it worthwhile to merge this
now. It's painful to maintain externally, too complicated to commit
after the code freeze, and a dependency of a number of other
patches.
Catversion bump, for obvious reasons.
Author: Andres Freund, with contributions by John Naylor
Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de
2018-11-21 00:36:57 +01:00
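The access-pattern change this commit forces can be sketched as follows: instead of fetching a hidden header oid (the old `HeapTupleGetOid()`), callers now read an ordinary leading `oid` column from the catalog struct. `Form_pg_class_min` and `relation_oid` below are hypothetical cut-down stand-ins, not backend definitions, and the OID in the usage is illustrative.

```c
#include <assert.h>

typedef unsigned int Oid;

/* Hypothetical cut-down stand-in for the real Form_pg_class. */
typedef struct Form_pg_class_min
{
	Oid			oid;			/* now a normal, declared column */
	/* ... other pg_class columns would follow ... */
} Form_pg_class_min;

/*
 * Post-commit style: the oid is just another struct member, so it is
 * also expanded by SELECT * and handled by ordinary column machinery.
 */
static Oid
relation_oid(const Form_pg_class_min *relform)
{
	return relform->oid;
}
```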
	typoid = GetSysCacheOid2(TYPENAMENSP, Anum_pg_type_oid,
							 CStringGetDatum(typeName),
							 ObjectIdGetDatum(typeNamespace));

	/*
	 * If it's not a shell, see if it's an autogenerated array type, and if so
	 * rename it out of the way.
	 */
	if (OidIsValid(typoid) && get_typisdefined(typoid))
	{
		if (moveArrayTypeName(typoid, typeName, typeNamespace))
			typoid = InvalidOid;
		else
			ereport(ERROR,
					(errcode(ERRCODE_DUPLICATE_OBJECT),
					 errmsg("type \"%s\" already exists", typeName)));
	}

	/*
	 * If this command is a parameterless CREATE TYPE, then we're just here to
	 * make a shell type, so do that (or fail if there already is a shell).
	 */
	if (parameters == NIL)
	{
		if (OidIsValid(typoid))
			ereport(ERROR,
					(errcode(ERRCODE_DUPLICATE_OBJECT),
					 errmsg("type \"%s\" already exists", typeName)));

		address = TypeShellMake(typeName, typeNamespace, GetUserId());
		return address;
	}

	/*
	 * Otherwise, we must already have a shell type, since there is no other
	 * way that the I/O functions could have been created.
	 */
	if (!OidIsValid(typoid))
		ereport(ERROR,
				(errcode(ERRCODE_UNDEFINED_OBJECT),
				 errmsg("type \"%s\" does not exist", typeName),
				 errhint("Create the type as a shell type, then create its I/O functions, then do a full CREATE TYPE.")));

	/* Extract the parameters from the parameter list */
	foreach(pl, parameters)
	{
		DefElem    *defel = (DefElem *) lfirst(pl);
		DefElem   **defelp;

Avoid unnecessary use of pg_strcasecmp for already-downcased identifiers.
We have a lot of code in which option names, which from the user's
viewpoint are logically keywords, are passed through the grammar as plain
identifiers, and then matched to string literals during command execution.
This approach avoids making words into lexer keywords unnecessarily. Some
places matched these strings using plain strcmp, some using pg_strcasecmp.
But the latter should be unnecessary since identifiers would have been
downcased on their way through the parser. Aside from any efficiency
concerns (probably not a big factor), the lack of consistency in this area
creates a hazard of subtle bugs due to different places coming to different
conclusions about whether two option names are the same or different.
Hence, standardize on using strcmp() to match any option names that are
expected to have been fed through the parser.
This does create a user-visible behavioral change, which is that while
formerly all of these would work:
alter table foo set (fillfactor = 50);
alter table foo set (FillFactor = 50);
alter table foo set ("fillfactor" = 50);
alter table foo set ("FillFactor" = 50);
now the last case will fail because that double-quoted identifier is
different from the others. However, none of our documentation says that
you can use a quoted identifier in such contexts at all, and we should
discourage doing so since it would break if we ever decide to parse such
constructs as true lexer keywords rather than poor man's substitutes.
So this shouldn't create a significant compatibility issue for users.
Daniel Gustafsson, reviewed by Michael Paquier, small changes by me
Discussion: https://postgr.es/m/29405B24-564E-476B-98C0-677A29805B84@yesql.se
2018-01-27 00:25:02 +01:00
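The convention this commit standardizes on can be sketched outside the backend: the parser downcases unquoted identifiers once, so later option matching can use plain `strcmp()` instead of `pg_strcasecmp()`, and a case-preserving quoted identifier simply fails to match. `downcase_identifier` and `matches_option` are invented illustrative helpers, not backend functions.

```c
#include <assert.h>
#include <ctype.h>
#include <stdbool.h>
#include <string.h>

/* Stand-in for the parser's one-time downcasing of unquoted identifiers. */
static void
downcase_identifier(char *s)
{
	for (; *s; s++)
		*s = (char) tolower((unsigned char) *s);
}

/* After downcasing, exact strcmp() is sufficient and unambiguous. */
static bool
matches_option(const char *defname, const char *option)
{
	return strcmp(defname, option) == 0;
}
```

A quoted `"FillFactor"` would skip the downcasing step and therefore no longer match, which is the user-visible change the message describes.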
|
|
|
if (strcmp(defel->defname, "like") == 0)
|
2008-11-30 20:01:29 +01:00
|
|
|
defelp = &likeTypeEl;
|
Avoid unnecessary use of pg_strcasecmp for already-downcased identifiers.
We have a lot of code in which option names, which from the user's
viewpoint are logically keywords, are passed through the grammar as plain
identifiers, and then matched to string literals during command execution.
This approach avoids making words into lexer keywords unnecessarily. Some
places matched these strings using plain strcmp, some using pg_strcasecmp.
But the latter should be unnecessary since identifiers would have been
downcased on their way through the parser. Aside from any efficiency
concerns (probably not a big factor), the lack of consistency in this area
creates a hazard of subtle bugs due to different places coming to different
conclusions about whether two option names are the same or different.
Hence, standardize on using strcmp() to match any option names that are
expected to have been fed through the parser.
This does create a user-visible behavioral change, which is that while
formerly all of these would work:
alter table foo set (fillfactor = 50);
alter table foo set (FillFactor = 50);
alter table foo set ("fillfactor" = 50);
alter table foo set ("FillFactor" = 50);
now the last case will fail because that double-quoted identifier is
different from the others. However, none of our documentation says that
you can use a quoted identifier in such contexts at all, and we should
discourage doing so since it would break if we ever decide to parse such
constructs as true lexer keywords rather than poor man's substitutes.
So this shouldn't create a significant compatibility issue for users.
Daniel Gustafsson, reviewed by Michael Paquier, small changes by me
Discussion: https://postgr.es/m/29405B24-564E-476B-98C0-677A29805B84@yesql.se
2018-01-27 00:25:02 +01:00
|
|
|
else if (strcmp(defel->defname, "internallength") == 0)
|
2008-11-30 20:01:29 +01:00
|
|
|
defelp = &internalLengthEl;
|
Avoid unnecessary use of pg_strcasecmp for already-downcased identifiers.
We have a lot of code in which option names, which from the user's
viewpoint are logically keywords, are passed through the grammar as plain
identifiers, and then matched to string literals during command execution.
This approach avoids making words into lexer keywords unnecessarily. Some
places matched these strings using plain strcmp, some using pg_strcasecmp.
But the latter should be unnecessary since identifiers would have been
downcased on their way through the parser. Aside from any efficiency
concerns (probably not a big factor), the lack of consistency in this area
creates a hazard of subtle bugs due to different places coming to different
conclusions about whether two option names are the same or different.
Hence, standardize on using strcmp() to match any option names that are
expected to have been fed through the parser.
This does create a user-visible behavioral change, which is that while
formerly all of these would work:
alter table foo set (fillfactor = 50);
alter table foo set (FillFactor = 50);
alter table foo set ("fillfactor" = 50);
alter table foo set ("FillFactor" = 50);
now the last case will fail because that double-quoted identifier is
different from the others. However, none of our documentation says that
you can use a quoted identifier in such contexts at all, and we should
discourage doing so since it would break if we ever decide to parse such
constructs as true lexer keywords rather than poor man's substitutes.
So this shouldn't create a significant compatibility issue for users.
Daniel Gustafsson, reviewed by Michael Paquier, small changes by me
Discussion: https://postgr.es/m/29405B24-564E-476B-98C0-677A29805B84@yesql.se
2018-01-27 00:25:02 +01:00
|
|
|
else if (strcmp(defel->defname, "input") == 0)
|
2008-11-30 20:01:29 +01:00
|
|
|
defelp = &inputNameEl;
|
Avoid unnecessary use of pg_strcasecmp for already-downcased identifiers.
We have a lot of code in which option names, which from the user's
viewpoint are logically keywords, are passed through the grammar as plain
identifiers, and then matched to string literals during command execution.
This approach avoids making words into lexer keywords unnecessarily. Some
places matched these strings using plain strcmp, some using pg_strcasecmp.
But the latter should be unnecessary since identifiers would have been
downcased on their way through the parser. Aside from any efficiency
concerns (probably not a big factor), the lack of consistency in this area
creates a hazard of subtle bugs due to different places coming to different
conclusions about whether two option names are the same or different.
Hence, standardize on using strcmp() to match any option names that are
expected to have been fed through the parser.
This does create a user-visible behavioral change, which is that while
formerly all of these would work:
alter table foo set (fillfactor = 50);
alter table foo set (FillFactor = 50);
alter table foo set ("fillfactor" = 50);
alter table foo set ("FillFactor" = 50);
now the last case will fail because that double-quoted identifier is
different from the others. However, none of our documentation says that
you can use a quoted identifier in such contexts at all, and we should
discourage doing so since it would break if we ever decide to parse such
constructs as true lexer keywords rather than poor man's substitutes.
So this shouldn't create a significant compatibility issue for users.
Daniel Gustafsson, reviewed by Michael Paquier, small changes by me
Discussion: https://postgr.es/m/29405B24-564E-476B-98C0-677A29805B84@yesql.se
2018-01-27 00:25:02 +01:00
|
|
|
else if (strcmp(defel->defname, "output") == 0)
|
2008-11-30 20:01:29 +01:00
|
|
|
defelp = &outputNameEl;
|
Avoid unnecessary use of pg_strcasecmp for already-downcased identifiers.
We have a lot of code in which option names, which from the user's
viewpoint are logically keywords, are passed through the grammar as plain
identifiers, and then matched to string literals during command execution.
This approach avoids making words into lexer keywords unnecessarily. Some
places matched these strings using plain strcmp, some using pg_strcasecmp.
But the latter should be unnecessary since identifiers would have been
downcased on their way through the parser. Aside from any efficiency
concerns (probably not a big factor), the lack of consistency in this area
creates a hazard of subtle bugs due to different places coming to different
conclusions about whether two option names are the same or different.
Hence, standardize on using strcmp() to match any option names that are
expected to have been fed through the parser.
This does create a user-visible behavioral change, which is that while
formerly all of these would work:
alter table foo set (fillfactor = 50);
alter table foo set (FillFactor = 50);
alter table foo set ("fillfactor" = 50);
alter table foo set ("FillFactor" = 50);
now the last case will fail because that double-quoted identifier is
different from the others. However, none of our documentation says that
you can use a quoted identifier in such contexts at all, and we should
discourage doing so since it would break if we ever decide to parse such
constructs as true lexer keywords rather than poor man's substitutes.
So this shouldn't create a significant compatibility issue for users.
Daniel Gustafsson, reviewed by Michael Paquier, small changes by me
Discussion: https://postgr.es/m/29405B24-564E-476B-98C0-677A29805B84@yesql.se
2018-01-27 00:25:02 +01:00
|
|
|
else if (strcmp(defel->defname, "receive") == 0)
|
2008-11-30 20:01:29 +01:00
|
|
|
defelp = &receiveNameEl;
|
Avoid unnecessary use of pg_strcasecmp for already-downcased identifiers.
We have a lot of code in which option names, which from the user's
viewpoint are logically keywords, are passed through the grammar as plain
identifiers, and then matched to string literals during command execution.
This approach avoids making words into lexer keywords unnecessarily. Some
places matched these strings using plain strcmp, some using pg_strcasecmp.
But the latter should be unnecessary since identifiers would have been
downcased on their way through the parser. Aside from any efficiency
concerns (probably not a big factor), the lack of consistency in this area
creates a hazard of subtle bugs due to different places coming to different
conclusions about whether two option names are the same or different.
Hence, standardize on using strcmp() to match any option names that are
expected to have been fed through the parser.
This does create a user-visible behavioral change, which is that while
formerly all of these would work:
alter table foo set (fillfactor = 50);
alter table foo set (FillFactor = 50);
alter table foo set ("fillfactor" = 50);
alter table foo set ("FillFactor" = 50);
now the last case will fail because that double-quoted identifier is
different from the others. However, none of our documentation says that
you can use a quoted identifier in such contexts at all, and we should
discourage doing so since it would break if we ever decide to parse such
constructs as true lexer keywords rather than poor man's substitutes.
So this shouldn't create a significant compatibility issue for users.
Daniel Gustafsson, reviewed by Michael Paquier, small changes by me
Discussion: https://postgr.es/m/29405B24-564E-476B-98C0-677A29805B84@yesql.se
2018-01-27 00:25:02 +01:00
|
|
|
else if (strcmp(defel->defname, "send") == 0)
|
2008-11-30 20:01:29 +01:00
|
|
|
defelp = &sendNameEl;
|
Avoid unnecessary use of pg_strcasecmp for already-downcased identifiers.
We have a lot of code in which option names, which from the user's
viewpoint are logically keywords, are passed through the grammar as plain
identifiers, and then matched to string literals during command execution.
This approach avoids making words into lexer keywords unnecessarily. Some
places matched these strings using plain strcmp, some using pg_strcasecmp.
But the latter should be unnecessary since identifiers would have been
downcased on their way through the parser. Aside from any efficiency
concerns (probably not a big factor), the lack of consistency in this area
creates a hazard of subtle bugs due to different places coming to different
conclusions about whether two option names are the same or different.
Hence, standardize on using strcmp() to match any option names that are
expected to have been fed through the parser.
This does create a user-visible behavioral change, which is that while
formerly all of these would work:
alter table foo set (fillfactor = 50);
alter table foo set (FillFactor = 50);
alter table foo set ("fillfactor" = 50);
alter table foo set ("FillFactor" = 50);
now the last case will fail because that double-quoted identifier is
different from the others. However, none of our documentation says that
you can use a quoted identifier in such contexts at all, and we should
discourage doing so since it would break if we ever decide to parse such
constructs as true lexer keywords rather than poor man's substitutes.
So this shouldn't create a significant compatibility issue for users.
Daniel Gustafsson, reviewed by Michael Paquier, small changes by me
Discussion: https://postgr.es/m/29405B24-564E-476B-98C0-677A29805B84@yesql.se
2018-01-27 00:25:02 +01:00
|
|
|
else if (strcmp(defel->defname, "typmod_in") == 0)
|
2008-11-30 20:01:29 +01:00
|
|
|
defelp = &typmodinNameEl;
|
Avoid unnecessary use of pg_strcasecmp for already-downcased identifiers.
We have a lot of code in which option names, which from the user's
viewpoint are logically keywords, are passed through the grammar as plain
identifiers, and then matched to string literals during command execution.
This approach avoids making words into lexer keywords unnecessarily. Some
places matched these strings using plain strcmp, some using pg_strcasecmp.
But the latter should be unnecessary since identifiers would have been
downcased on their way through the parser. Aside from any efficiency
concerns (probably not a big factor), the lack of consistency in this area
creates a hazard of subtle bugs due to different places coming to different
conclusions about whether two option names are the same or different.
Hence, standardize on using strcmp() to match any option names that are
expected to have been fed through the parser.
This does create a user-visible behavioral change, which is that while
formerly all of these would work:
alter table foo set (fillfactor = 50);
alter table foo set (FillFactor = 50);
alter table foo set ("fillfactor" = 50);
alter table foo set ("FillFactor" = 50);
now the last case will fail because that double-quoted identifier is
different from the others. However, none of our documentation says that
you can use a quoted identifier in such contexts at all, and we should
discourage doing so since it would break if we ever decide to parse such
constructs as true lexer keywords rather than poor man's substitutes.
So this shouldn't create a significant compatibility issue for users.
Daniel Gustafsson, reviewed by Michael Paquier, small changes by me
Discussion: https://postgr.es/m/29405B24-564E-476B-98C0-677A29805B84@yesql.se
2018-01-27 00:25:02 +01:00
|
|
|
else if (strcmp(defel->defname, "typmod_out") == 0)
|
2008-11-30 20:01:29 +01:00
|
|
|
defelp = &typmodoutNameEl;
|
Avoid unnecessary use of pg_strcasecmp for already-downcased identifiers.
We have a lot of code in which option names, which from the user's
viewpoint are logically keywords, are passed through the grammar as plain
identifiers, and then matched to string literals during command execution.
This approach avoids making words into lexer keywords unnecessarily. Some
places matched these strings using plain strcmp, some using pg_strcasecmp.
But the latter should be unnecessary since identifiers would have been
downcased on their way through the parser. Aside from any efficiency
concerns (probably not a big factor), the lack of consistency in this area
creates a hazard of subtle bugs due to different places coming to different
conclusions about whether two option names are the same or different.
Hence, standardize on using strcmp() to match any option names that are
expected to have been fed through the parser.
This does create a user-visible behavioral change, which is that while
formerly all of these would work:
alter table foo set (fillfactor = 50);
alter table foo set (FillFactor = 50);
alter table foo set ("fillfactor" = 50);
alter table foo set ("FillFactor" = 50);
now the last case will fail because that double-quoted identifier is
different from the others. However, none of our documentation says that
you can use a quoted identifier in such contexts at all, and we should
discourage doing so since it would break if we ever decide to parse such
constructs as true lexer keywords rather than poor man's substitutes.
So this shouldn't create a significant compatibility issue for users.
Daniel Gustafsson, reviewed by Michael Paquier, small changes by me
Discussion: https://postgr.es/m/29405B24-564E-476B-98C0-677A29805B84@yesql.se
2018-01-27 00:25:02 +01:00
		else if (strcmp(defel->defname, "analyze") == 0 ||
				 strcmp(defel->defname, "analyse") == 0)
			defelp = &analyzeNameEl;
Support subscripting of arbitrary types, not only arrays.
This patch generalizes the subscripting infrastructure so that any
data type can be subscripted, if it provides a handler function to
define what that means. Traditional variable-length (varlena) arrays
all use array_subscript_handler(), while the existing fixed-length
types that support subscripting use raw_array_subscript_handler().
It's expected that other types that want to use subscripting notation
will define their own handlers. (This patch provides no such new
features, though; it only lays the foundation for them.)
To do this, move the parser's semantic processing of subscripts
(including coercion to whatever data type is required) into a
method callback supplied by the handler. On the execution side,
replace the ExecEvalSubscriptingRef* layer of functions with direct
calls to callback-supplied execution routines. (Thus, essentially
no new run-time overhead should be caused by this patch. Indeed,
there is room to remove some overhead by supplying specialized
execution routines. This patch does a little bit in that line,
but more could be done.)
Additional work is required here and there to remove formerly
hard-wired assumptions about the result type, collation, etc
of a SubscriptingRef expression node; and to remove assumptions
that the subscript values must be integers.
One useful side-effect of this is that we now have a less squishy
mechanism for identifying whether a data type is a "true" array:
instead of wiring in weird rules about typlen, we can look to see
if pg_type.typsubscript == F_ARRAY_SUBSCRIPT_HANDLER. For this
to be bulletproof, we have to forbid user-defined types from using
that handler directly; but there seems no good reason for them to
do so.
This patch also removes assumptions that the number of subscripts
is limited to MAXDIM (6), or indeed has any hard-wired limit.
That limit still applies to types handled by array_subscript_handler
or raw_array_subscript_handler, but to discourage other dependencies
on this constant, I've moved it from c.h to utils/array.h.
Dmitry Dolgov, reviewed at various times by Tom Lane, Arthur Zakirov,
Peter Eisentraut, Pavel Stehule
Discussion: https://postgr.es/m/CA+q6zcVDuGBv=M0FqBYX8DPebS3F_0KQ6OVFobGJPM507_SZ_w@mail.gmail.com
Discussion: https://postgr.es/m/CA+q6zcVovR+XY4mfk-7oNk-rF91gH0PebnNfuUjuuDsyHjOcVA@mail.gmail.com
2020-12-09 18:40:37 +01:00
		else if (strcmp(defel->defname, "subscript") == 0)
			defelp = &subscriptNameEl;
		else if (strcmp(defel->defname, "category") == 0)
			defelp = &categoryEl;
		else if (strcmp(defel->defname, "preferred") == 0)
			defelp = &preferredEl;
		else if (strcmp(defel->defname, "delimiter") == 0)
			defelp = &delimiterEl;
		else if (strcmp(defel->defname, "element") == 0)
			defelp = &elemTypeEl;
		else if (strcmp(defel->defname, "default") == 0)
			defelp = &defaultValueEl;
		else if (strcmp(defel->defname, "passedbyvalue") == 0)
			defelp = &byValueEl;
		else if (strcmp(defel->defname, "alignment") == 0)
			defelp = &alignmentEl;
		else if (strcmp(defel->defname, "storage") == 0)
			defelp = &storageEl;
		else if (strcmp(defel->defname, "collatable") == 0)
			defelp = &collatableEl;
		else
		{
			/* WARNING, not ERROR, for historical backwards-compatibility */
			ereport(WARNING,
					(errcode(ERRCODE_SYNTAX_ERROR),
					 errmsg("type attribute \"%s\" not recognized",
							defel->defname),
					 parser_errposition(pstate, defel->location)));
			continue;
		}

		if (*defelp != NULL)
			ereport(ERROR,
					(errcode(ERRCODE_SYNTAX_ERROR),
					 errmsg("conflicting or redundant options"),
					 parser_errposition(pstate, defel->location)));
		*defelp = defel;
	}

	/*
	 * Now interpret the options; we do this separately so that LIKE can be
	 * overridden by other options regardless of the ordering in the parameter
	 * list.
	 */
	if (likeTypeEl)
	{
		Type		likeType;
		Form_pg_type likeForm;

Remove collation information from TypeName, where it does not belong.
The initial collations patch treated a COLLATE spec as part of a TypeName,
following what can only be described as brain fade on the part of the SQL
committee. It's a lot more reasonable to treat COLLATE as a syntactically
separate object, so that it can be added in only the productions where it
actually belongs, rather than needing to reject it in a boatload of places
where it doesn't belong (something the original patch mostly failed to do).
In addition this change lets us meet the spec's requirement to allow
COLLATE anywhere in the clauses of a ColumnDef, and it avoids unfriendly
behavior for constructs such as "foo::type COLLATE collation".
To do this, pull collation information out of TypeName and put it in
ColumnDef instead, thus reverting most of the collation-related changes in
parse_type.c's API. I made one additional structural change, which was to
use a ColumnDef as an intermediate node in AT_AlterColumnType AlterTableCmd
nodes. This provides enough room to get rid of the "transform" wart in
AlterTableCmd too, since the ColumnDef can carry the USING expression
easily enough.
Also fix some other minor bugs that have crept in in the same areas,
like failure to copy recently-added fields of ColumnDef in copyfuncs.c.
While at it, document the formerly secret ability to specify a collation
in ALTER TABLE ALTER COLUMN TYPE, ALTER TYPE ADD ATTRIBUTE, and
ALTER TYPE ALTER ATTRIBUTE TYPE; and correct some misstatements about
what the default collation selection will be when COLLATE is omitted.
BTW, the three-parameter form of format_type() should go away too,
since it just contributes to the confusion in this area; but I'll do
that in a separate patch.
2011-03-10 04:38:52 +01:00
		likeType = typenameType(NULL, defGetTypeName(likeTypeEl), NULL);
		likeForm = (Form_pg_type) GETSTRUCT(likeType);
		internalLength = likeForm->typlen;
		byValue = likeForm->typbyval;
		alignment = likeForm->typalign;
		storage = likeForm->typstorage;
		ReleaseSysCache(likeType);
	}
	if (internalLengthEl)
		internalLength = defGetTypeLength(internalLengthEl);
	if (inputNameEl)
		inputName = defGetQualifiedName(inputNameEl);
	if (outputNameEl)
		outputName = defGetQualifiedName(outputNameEl);
	if (receiveNameEl)
		receiveName = defGetQualifiedName(receiveNameEl);
	if (sendNameEl)
		sendName = defGetQualifiedName(sendNameEl);
	if (typmodinNameEl)
		typmodinName = defGetQualifiedName(typmodinNameEl);
	if (typmodoutNameEl)
		typmodoutName = defGetQualifiedName(typmodoutNameEl);
	if (analyzeNameEl)
		analyzeName = defGetQualifiedName(analyzeNameEl);
	if (subscriptNameEl)
		subscriptName = defGetQualifiedName(subscriptNameEl);
Phase 3 of pgindent updates.
Don't move parenthesized lines to the left, even if that means they
flow past the right margin.
By default, BSD indent lines up statement continuation lines that are
within parentheses so that they start just to the right of the preceding
left parenthesis. However, traditionally, if that resulted in the
continuation line extending to the right of the desired right margin,
then indent would push it left just far enough to not overrun the margin,
if it could do so without making the continuation line start to the left of
the current statement indent. That makes for a weird mix of indentations
unless one has been completely rigid about never violating the 80-column
limit.
This behavior has been pretty universally panned by Postgres developers.
Hence, disable it with indent's new -lpl switch, so that parenthesized
lines are always lined up with the preceding left paren.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 21:35:54 +02:00

	if (categoryEl)
	{
		char	   *p = defGetString(categoryEl);

		category = p[0];
		/* restrict to non-control ASCII */
		if (category < 32 || category > 126)
			ereport(ERROR,
					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
					 errmsg("invalid type category \"%s\": must be simple ASCII",
							p)));
	}
	if (preferredEl)
		preferred = defGetBoolean(preferredEl);
	if (delimiterEl)
	{
		char	   *p = defGetString(delimiterEl);

		delimiter = p[0];
		/* XXX shouldn't we restrict the delimiter? */
	}
	if (elemTypeEl)
	{
		elemType = typenameTypeId(NULL, defGetTypeName(elemTypeEl));
		/* disallow arrays of pseudotypes */
		if (get_typtype(elemType) == TYPTYPE_PSEUDO)
			ereport(ERROR,
					(errcode(ERRCODE_DATATYPE_MISMATCH),
					 errmsg("array element type cannot be %s",
							format_type_be(elemType))));
	}
	if (defaultValueEl)
		defaultValue = defGetString(defaultValueEl);
	if (byValueEl)
		byValue = defGetBoolean(byValueEl);
	if (alignmentEl)
	{
		char	   *a = defGetString(alignmentEl);

		/*
		 * Note: if argument was an unquoted identifier, parser will have
		 * applied translations to it, so be prepared to recognize translated
		 * type names as well as the nominal form.
		 */
		if (pg_strcasecmp(a, "double") == 0 ||
			pg_strcasecmp(a, "float8") == 0 ||
			pg_strcasecmp(a, "pg_catalog.float8") == 0)
			alignment = TYPALIGN_DOUBLE;
		else if (pg_strcasecmp(a, "int4") == 0 ||
				 pg_strcasecmp(a, "pg_catalog.int4") == 0)
			alignment = TYPALIGN_INT;
		else if (pg_strcasecmp(a, "int2") == 0 ||
				 pg_strcasecmp(a, "pg_catalog.int2") == 0)
			alignment = TYPALIGN_SHORT;
		else if (pg_strcasecmp(a, "char") == 0 ||
				 pg_strcasecmp(a, "pg_catalog.bpchar") == 0)
			alignment = TYPALIGN_CHAR;
		else
			ereport(ERROR,
					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
					 errmsg("alignment \"%s\" not recognized", a)));
	}
	if (storageEl)
	{
		char	   *a = defGetString(storageEl);

		if (pg_strcasecmp(a, "plain") == 0)
			storage = TYPSTORAGE_PLAIN;
		else if (pg_strcasecmp(a, "external") == 0)
			storage = TYPSTORAGE_EXTERNAL;
		else if (pg_strcasecmp(a, "extended") == 0)
			storage = TYPSTORAGE_EXTENDED;
		else if (pg_strcasecmp(a, "main") == 0)
			storage = TYPSTORAGE_MAIN;
		else
			ereport(ERROR,
					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
					 errmsg("storage \"%s\" not recognized", a)));
	}
	if (collatableEl)
		collation = defGetBoolean(collatableEl) ? DEFAULT_COLLATION_OID : InvalidOid;

	/*
	 * make sure we have our required definitions
	 */
	if (inputName == NIL)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
				 errmsg("type input function must be specified")));
	if (outputName == NIL)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
				 errmsg("type output function must be specified")));

	if (typmodinName == NIL && typmodoutName != NIL)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
				 errmsg("type modifier output function is useless without a type modifier input function")));
2002-08-22 02:01:51 +02:00
|
|
|
/*
|
|
|
|
* Convert I/O proc names to OIDs
|
|
|
|
*/
|
2003-05-09 00:19:58 +02:00
|
|
|
inputOid = findTypeInputFunction(inputName, typoid);
|
|
|
|
outputOid = findTypeOutputFunction(outputName, typoid);
|
|
|
|
if (receiveName)
|
|
|
|
receiveOid = findTypeReceiveFunction(receiveName, typoid);
|
|
|
|
if (sendName)
|
|
|
|
sendOid = findTypeSendFunction(sendName, typoid);
|
2002-08-22 02:01:51 +02:00
|
|
|
|
2006-12-30 22:21:56 +01:00
|
|
|
/*
|
|
|
|
* Convert typmodin/out function proc names to OIDs.
|
|
|
|
*/
|
|
|
|
if (typmodinName)
|
|
|
|
typmodinOid = findTypeTypmodinFunction(typmodinName);
|
|
|
|
if (typmodoutName)
|
|
|
|
typmodoutOid = findTypeTypmodoutFunction(typmodoutName);
|
|
|
|
|
2004-02-13 00:41:04 +01:00
|
|
|
/*
|
2005-10-15 04:49:52 +02:00
|
|
|
* Convert analysis function proc name to an OID. If no analysis function
|
|
|
|
* is specified, we'll use zero to select the built-in default algorithm.
|
2004-02-13 00:41:04 +01:00
|
|
|
*/
|
|
|
|
if (analyzeName)
|
|
|
|
analyzeOid = findTypeAnalyzeFunction(analyzeName, typoid);
|
|
|
|
|
Support subscripting of arbitrary types, not only arrays.
This patch generalizes the subscripting infrastructure so that any
data type can be subscripted, if it provides a handler function to
define what that means. Traditional variable-length (varlena) arrays
all use array_subscript_handler(), while the existing fixed-length
types that support subscripting use raw_array_subscript_handler().
It's expected that other types that want to use subscripting notation
will define their own handlers. (This patch provides no such new
features, though; it only lays the foundation for them.)
To do this, move the parser's semantic processing of subscripts
(including coercion to whatever data type is required) into a
method callback supplied by the handler. On the execution side,
replace the ExecEvalSubscriptingRef* layer of functions with direct
calls to callback-supplied execution routines. (Thus, essentially
no new run-time overhead should be caused by this patch. Indeed,
there is room to remove some overhead by supplying specialized
execution routines. This patch does a little bit in that line,
but more could be done.)
Additional work is required here and there to remove formerly
hard-wired assumptions about the result type, collation, etc
of a SubscriptingRef expression node; and to remove assumptions
that the subscript values must be integers.
One useful side-effect of this is that we now have a less squishy
mechanism for identifying whether a data type is a "true" array:
instead of wiring in weird rules about typlen, we can look to see
if pg_type.typsubscript == F_ARRAY_SUBSCRIPT_HANDLER. For this
to be bulletproof, we have to forbid user-defined types from using
that handler directly; but there seems no good reason for them to
do so.
This patch also removes assumptions that the number of subscripts
is limited to MAXDIM (6), or indeed has any hard-wired limit.
That limit still applies to types handled by array_subscript_handler
or raw_array_subscript_handler, but to discourage other dependencies
on this constant, I've moved it from c.h to utils/array.h.
Dmitry Dolgov, reviewed at various times by Tom Lane, Arthur Zakirov,
Peter Eisentraut, Pavel Stehule
Discussion: https://postgr.es/m/CA+q6zcVDuGBv=M0FqBYX8DPebS3F_0KQ6OVFobGJPM507_SZ_w@mail.gmail.com
Discussion: https://postgr.es/m/CA+q6zcVovR+XY4mfk-7oNk-rF91gH0PebnNfuUjuuDsyHjOcVA@mail.gmail.com
2020-12-09 18:40:37 +01:00
|
|
|
/*
|
|
|
|
* Likewise look up the subscripting procedure if any. If it is not
|
|
|
|
* specified, but a typelem is specified, allow that if
|
|
|
|
* raw_array_subscript_handler can be used. (This is for backwards
|
|
|
|
* compatibility; maybe someday we should throw an error instead.)
|
|
|
|
*/
|
|
|
|
if (subscriptName)
|
|
|
|
subscriptOid = findTypeSubscriptingFunction(subscriptName, typoid);
|
|
|
|
else if (OidIsValid(elemType))
|
|
|
|
{
|
|
|
|
if (internalLength > 0 && !byValue && get_typlen(elemType) > 0)
|
|
|
|
subscriptOid = F_RAW_ARRAY_SUBSCRIPT_HANDLER;
|
|
|
|
else
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
|
|
|
|
errmsg("element type cannot be specified without a valid subscripting procedure")));
|
|
|
|
}
|
|
|
|
|
2006-01-13 19:06:45 +01:00
|
|
|
/*
|
2014-05-06 18:12:18 +02:00
|
|
|
* Check permissions on functions. We choose to require the creator/owner
|
|
|
|
* of a type to also own the underlying functions. Since creating a type
|
2006-01-13 19:06:45 +01:00
|
|
|
* is tantamount to granting public execute access on the functions, the
|
2006-10-04 02:30:14 +02:00
|
|
|
* minimum sane check would be for execute-with-grant-option. But we
|
|
|
|
* don't have a way to make the type go away if the grant option is
|
|
|
|
* revoked, so ownership seems better.
|
2020-03-06 18:19:29 +01:00
|
|
|
*
|
|
|
|
* XXX For now, this is all unnecessary given the superuser check above.
|
|
|
|
* If we ever relax that, these calls likely should be moved into
|
|
|
|
* findTypeInputFunction et al, where they could be shared by AlterType.
|
2006-01-13 19:06:45 +01:00
|
|
|
*/
|
2008-07-31 18:27:16 +02:00
|
|
|
#ifdef NOT_USED
|
2006-01-13 19:06:45 +01:00
|
|
|
if (inputOid && !pg_proc_ownercheck(inputOid, GetUserId()))
|
2017-12-02 15:26:34 +01:00
|
|
|
aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION,
|
2006-01-13 19:06:45 +01:00
|
|
|
NameListToString(inputName));
|
|
|
|
if (outputOid && !pg_proc_ownercheck(outputOid, GetUserId()))
|
2017-12-02 15:26:34 +01:00
|
|
|
aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION,
|
2006-01-13 19:06:45 +01:00
|
|
|
NameListToString(outputName));
|
|
|
|
if (receiveOid && !pg_proc_ownercheck(receiveOid, GetUserId()))
|
2017-12-02 15:26:34 +01:00
|
|
|
aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION,
|
2006-01-13 19:06:45 +01:00
|
|
|
NameListToString(receiveName));
|
|
|
|
if (sendOid && !pg_proc_ownercheck(sendOid, GetUserId()))
|
2017-12-02 15:26:34 +01:00
|
|
|
aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION,
|
2006-01-13 19:06:45 +01:00
|
|
|
NameListToString(sendName));
|
2006-12-30 22:21:56 +01:00
|
|
|
if (typmodinOid && !pg_proc_ownercheck(typmodinOid, GetUserId()))
|
2017-12-02 15:26:34 +01:00
|
|
|
aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION,
|
2006-12-30 22:21:56 +01:00
|
|
|
NameListToString(typmodinName));
|
|
|
|
if (typmodoutOid && !pg_proc_ownercheck(typmodoutOid, GetUserId()))
|
2017-12-02 15:26:34 +01:00
|
|
|
aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION,
|
2006-12-30 22:21:56 +01:00
|
|
|
NameListToString(typmodoutName));
|
2006-01-13 19:06:45 +01:00
|
|
|
if (analyzeOid && !pg_proc_ownercheck(analyzeOid, GetUserId()))
|
2017-12-02 15:26:34 +01:00
|
|
|
aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION,
|
2006-01-13 19:06:45 +01:00
|
|
|
NameListToString(analyzeName));
|
Support subscripting of arbitrary types, not only arrays.
This patch generalizes the subscripting infrastructure so that any
data type can be subscripted, if it provides a handler function to
define what that means. Traditional variable-length (varlena) arrays
all use array_subscript_handler(), while the existing fixed-length
types that support subscripting use raw_array_subscript_handler().
It's expected that other types that want to use subscripting notation
will define their own handlers. (This patch provides no such new
features, though; it only lays the foundation for them.)
To do this, move the parser's semantic processing of subscripts
(including coercion to whatever data type is required) into a
method callback supplied by the handler. On the execution side,
replace the ExecEvalSubscriptingRef* layer of functions with direct
calls to callback-supplied execution routines. (Thus, essentially
no new run-time overhead should be caused by this patch. Indeed,
there is room to remove some overhead by supplying specialized
execution routines. This patch does a little bit in that line,
but more could be done.)
Additional work is required here and there to remove formerly
hard-wired assumptions about the result type, collation, etc
of a SubscriptingRef expression node; and to remove assumptions
that the subscript values must be integers.
One useful side-effect of this is that we now have a less squishy
mechanism for identifying whether a data type is a "true" array:
instead of wiring in weird rules about typlen, we can look to see
if pg_type.typsubscript == F_ARRAY_SUBSCRIPT_HANDLER. For this
to be bulletproof, we have to forbid user-defined types from using
that handler directly; but there seems no good reason for them to
do so.
This patch also removes assumptions that the number of subscripts
is limited to MAXDIM (6), or indeed has any hard-wired limit.
That limit still applies to types handled by array_subscript_handler
or raw_array_subscript_handler, but to discourage other dependencies
on this constant, I've moved it from c.h to utils/array.h.
Dmitry Dolgov, reviewed at various times by Tom Lane, Arthur Zakirov,
Peter Eisentraut, Pavel Stehule
Discussion: https://postgr.es/m/CA+q6zcVDuGBv=M0FqBYX8DPebS3F_0KQ6OVFobGJPM507_SZ_w@mail.gmail.com
Discussion: https://postgr.es/m/CA+q6zcVovR+XY4mfk-7oNk-rF91gH0PebnNfuUjuuDsyHjOcVA@mail.gmail.com
2020-12-09 18:40:37 +01:00
|
|
|
if (subscriptOid && !pg_proc_ownercheck(subscriptOid, GetUserId()))
|
|
|
|
aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION,
|
|
|
|
NameListToString(subscriptName));
|
2008-07-31 18:27:16 +02:00
|
|
|
#endif
|
2006-01-13 19:06:45 +01:00
|
|
|
|
2014-11-05 17:44:06 +01:00
|
|
|
/*
|
|
|
|
* OK, we're done checking, time to make the type. We must assign the
|
|
|
|
* array type OID ahead of calling TypeCreate, since the base type and
|
|
|
|
* array type each refer to the other.
|
|
|
|
*/
|
2009-12-24 23:09:24 +01:00
|
|
|
array_oid = AssignTypeArrayOid();
|
Support arrays of composite types, including the rowtypes of regular tables
and views (but not system catalogs, nor sequences or toast tables). Get rid
of the hardwired convention that a type's array type is named exactly "_type",
instead using a new column pg_type.typarray to provide the linkage. (It still
will be named "_type", though, except in odd corner cases such as
maximum-length type names.)
Along the way, make tracking of owner and schema dependencies for types more
uniform: a type directly created by the user has these dependencies, while a
table rowtype or auto-generated array type does not have them, but depends on
its parent object instead.
David Fetter, Andrew Dunstan, Tom Lane
2007-05-11 19:57:14 +02:00
|
|
|
|
2002-04-15 07:22:04 +02:00
|
|
|
/*
|
|
|
|
* now have TypeCreate do all the real work.
|
Improve handling of domains over arrays.
This patch eliminates various bizarre behaviors caused by sloppy thinking
about the difference between a domain type and its underlying array type.
In particular, the operation of updating one element of such an array
has to be considered as yielding a value of the underlying array type,
*not* a value of the domain, because there's no assurance that the
domain's CHECK constraints are still satisfied. If we're intending to
store the result back into a domain column, we have to re-cast to the
domain type so that constraints are re-checked.
For similar reasons, such a domain can't be blindly matched to an ANYARRAY
polymorphic parameter, because the polymorphic function is likely to apply
array-ish operations that could invalidate the domain constraints. For the
moment, we just forbid such matching. We might later wish to insert an
automatic downcast to the underlying array type, but such a change should
also change matching of domains to ANYELEMENT for consistency.
To ensure that all such logic is rechecked, this patch removes the original
hack of setting a domain's pg_type.typelem field to match its base type;
the typelem will always be zero instead. In those places where it's really
okay to look through the domain type with no other logic changes, use the
newly added get_base_element_type function in place of get_element_type.
catversion bumped due to change in pg_type contents.
Per bug #5717 from Richard Huxton and subsequent discussion.
2010-10-21 22:07:17 +02:00
|
|
|
*
|
|
|
|
* Note: the pg_type.oid is stored in user tables as array elements (base
|
2014-05-06 18:12:18 +02:00
|
|
|
* types) in ArrayType and in composite types in DatumTupleFields. This
|
Improve handling of domains over arrays.
This patch eliminates various bizarre behaviors caused by sloppy thinking
about the difference between a domain type and its underlying array type.
In particular, the operation of updating one element of such an array
has to be considered as yielding a value of the underlying array type,
*not* a value of the domain, because there's no assurance that the
domain's CHECK constraints are still satisfied. If we're intending to
store the result back into a domain column, we have to re-cast to the
domain type so that constraints are re-checked.
For similar reasons, such a domain can't be blindly matched to an ANYARRAY
polymorphic parameter, because the polymorphic function is likely to apply
array-ish operations that could invalidate the domain constraints. For the
moment, we just forbid such matching. We might later wish to insert an
automatic downcast to the underlying array type, but such a change should
also change matching of domains to ANYELEMENT for consistency.
To ensure that all such logic is rechecked, this patch removes the original
hack of setting a domain's pg_type.typelem field to match its base type;
the typelem will always be zero instead. In those places where it's really
okay to look through the domain type with no other logic changes, use the
newly added get_base_element_type function in place of get_element_type.
catversion bumped due to change in pg_type contents.
Per bug #5717 from Richard Huxton and subsequent discussion.
2010-10-21 22:07:17 +02:00
|
|
|
* oid must be preserved by binary upgrades.
|
2002-04-15 07:22:04 +02:00
|
|
|
*/
|
Change many routines to return ObjectAddress rather than OID
The changed routines are mostly those that can be directly called by
ProcessUtilitySlow; the intention is to make the affected object
information more precise, in support for future event trigger changes.
Originally it was envisioned that the OID of the affected object would
be enough, and in most cases that is correct, but upon actually
implementing the event trigger changes it turned out that ObjectAddress
is more widely useful.
Additionally, some command execution routines grew an output argument
that's an object address which provides further info about the executed
command. To wit:
* for ALTER DOMAIN / ADD CONSTRAINT, it corresponds to the address of
the new constraint
* for ALTER OBJECT / SET SCHEMA, it corresponds to the address of the
schema that originally contained the object.
* for ALTER EXTENSION {ADD, DROP} OBJECT, it corresponds to the address
of the object added to or dropped from the extension.
There's no user-visible change in this commit, and no functional change
either.
Discussion: 20150218213255.GC6717@tamriel.snowman.net
Reviewed-By: Stephen Frost, Andres Freund
2015-03-03 18:10:50 +01:00
|
|
|
address =
|
Support arrays of composite types, including the rowtypes of regular tables
and views (but not system catalogs, nor sequences or toast tables). Get rid
of the hardwired convention that a type's array type is named exactly "_type",
instead using a new column pg_type.typarray to provide the linkage. (It still
will be named "_type", though, except in odd corner cases such as
maximum-length type names.)
Along the way, make tracking of owner and schema dependencies for types more
uniform: a type directly created by the user has these dependencies, while a
table rowtype or auto-generated array type does not have them, but depends on
its parent object instead.
David Fetter, Andrew Dunstan, Tom Lane
2007-05-11 19:57:14 +02:00
|
|
|
TypeCreate(InvalidOid, /* no predetermined type OID */
|
|
|
|
typeName, /* type name */
|
Phase 2 of pgindent updates.
Change pg_bsd_indent to follow upstream rules for placement of comments
to the right of code, and remove pgindent hack that caused comments
following #endif to not obey the general rule.
Commit e3860ffa4dd0dad0dd9eea4be9cc1412373a8c89 wasn't actually using
the published version of pg_bsd_indent, but a hacked-up version that
tried to minimize the amount of movement of comments to the right of
code. The situation of interest is where such a comment has to be
moved to the right of its default placement at column 33 because there's
code there. BSD indent has always moved right in units of tab stops
in such cases --- but in the previous incarnation, indent was working
in 8-space tab stops, while now it knows we use 4-space tabs. So the
net result is that in about half the cases, such comments are placed
one tab stop left of before. This is better all around: it leaves
more room on the line for comment text, and it means that in such
cases the comment uniformly starts at the next 4-space tab stop after
the code, rather than sometimes one and sometimes two tabs after.
Also, ensure that comments following #endif are indented the same
as comments following other preprocessor commands such as #else.
That inconsistency turns out to have been self-inflicted damage
from a poorly-thought-through post-indent "fixup" in pgindent.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 21:18:54 +02:00
|
|
|
typeNamespace, /* namespace */
|
2002-09-04 22:31:48 +02:00
|
|
|
InvalidOid, /* relation oid (n/a here) */
|
|
|
|
0, /* relation kind (ditto) */
|
2009-06-11 16:49:15 +02:00
|
|
|
GetUserId(), /* owner's ID */
|
Phase 2 of pgindent updates.
Change pg_bsd_indent to follow upstream rules for placement of comments
to the right of code, and remove pgindent hack that caused comments
following #endif to not obey the general rule.
Commit e3860ffa4dd0dad0dd9eea4be9cc1412373a8c89 wasn't actually using
the published version of pg_bsd_indent, but a hacked-up version that
tried to minimize the amount of movement of comments to the right of
code. The situation of interest is where such a comment has to be
moved to the right of its default placement at column 33 because there's
code there. BSD indent has always moved right in units of tab stops
in such cases --- but in the previous incarnation, indent was working
in 8-space tab stops, while now it knows we use 4-space tabs. So the
net result is that in about half the cases, such comments are placed
one tab stop left of before. This is better all around: it leaves
more room on the line for comment text, and it means that in such
cases the comment uniformly starts at the next 4-space tab stop after
the code, rather than sometimes one and sometimes two tabs after.
Also, ensure that comments following #endif are indented the same
as comments following other preprocessor commands such as #else.
That inconsistency turns out to have been self-inflicted damage
from a poorly-thought-through post-indent "fixup" in pgindent.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 21:18:54 +02:00
|
|
|
internalLength, /* internal size */
|
2007-11-15 22:14:46 +01:00
|
|
|
TYPTYPE_BASE, /* type-type (base type) */
|
Replace the hard-wired type knowledge in TypeCategory() and IsPreferredType()
with system catalog lookups, as was foreseen to be necessary almost since
their creation. Instead put the information into two new pg_type columns,
typcategory and typispreferred. Add support for setting these when
creating a user-defined base type.
The category column is just a "char" (i.e. a poor man's enum), allowing
a crude form of user extensibility of the category list: just use an
otherwise-unused character. This seems sufficient for foreseen uses,
but we could upgrade to having an actual category catalog someday, if
there proves to be a huge demand for custom type categories.
In this patch I have attempted to hew exactly to the behavior of the
previous hardwired logic, except for introducing new type categories for
arrays, composites, and enums. In particular the default preferred state
for user-defined types remains TRUE. That seems worth revisiting, but it
should be done as a separate patch from introducing the infrastructure.
Likewise, any adjustment of the standard set of categories should be done
separately.
2008-07-30 19:05:05 +02:00
|
|
|
category, /* type-category */
|
|
|
|
preferred, /* is it a preferred type? */
|
2002-09-04 22:31:48 +02:00
|
|
|
delimiter, /* array element delimiter */
|
|
|
|
inputOid, /* input procedure */
|
|
|
|
outputOid, /* output procedure */
|
2003-05-09 00:19:58 +02:00
|
|
|
receiveOid, /* receive procedure */
|
|
|
|
sendOid, /* send procedure */
|
2006-12-30 22:21:56 +01:00
|
|
|
typmodinOid, /* typmodin procedure */
|
2007-11-15 22:14:46 +01:00
|
|
|
typmodoutOid, /* typmodout procedure */
|
2004-02-13 00:41:04 +01:00
|
|
|
analyzeOid, /* analyze procedure */
|
Support subscripting of arbitrary types, not only arrays.
This patch generalizes the subscripting infrastructure so that any
data type can be subscripted, if it provides a handler function to
define what that means. Traditional variable-length (varlena) arrays
all use array_subscript_handler(), while the existing fixed-length
types that support subscripting use raw_array_subscript_handler().
It's expected that other types that want to use subscripting notation
will define their own handlers. (This patch provides no such new
features, though; it only lays the foundation for them.)
To do this, move the parser's semantic processing of subscripts
(including coercion to whatever data type is required) into a
method callback supplied by the handler. On the execution side,
replace the ExecEvalSubscriptingRef* layer of functions with direct
calls to callback-supplied execution routines. (Thus, essentially
no new run-time overhead should be caused by this patch. Indeed,
there is room to remove some overhead by supplying specialized
execution routines. This patch does a little bit in that line,
but more could be done.)
Additional work is required here and there to remove formerly
hard-wired assumptions about the result type, collation, etc
of a SubscriptingRef expression node; and to remove assumptions
that the subscript values must be integers.
One useful side-effect of this is that we now have a less squishy
mechanism for identifying whether a data type is a "true" array:
instead of wiring in weird rules about typlen, we can look to see
if pg_type.typsubscript == F_ARRAY_SUBSCRIPT_HANDLER. For this
to be bulletproof, we have to forbid user-defined types from using
that handler directly; but there seems no good reason for them to
do so.
This patch also removes assumptions that the number of subscripts
is limited to MAXDIM (6), or indeed has any hard-wired limit.
That limit still applies to types handled by array_subscript_handler
or raw_array_subscript_handler, but to discourage other dependencies
on this constant, I've moved it from c.h to utils/array.h.
Dmitry Dolgov, reviewed at various times by Tom Lane, Arthur Zakirov,
Peter Eisentraut, Pavel Stehule
Discussion: https://postgr.es/m/CA+q6zcVDuGBv=M0FqBYX8DPebS3F_0KQ6OVFobGJPM507_SZ_w@mail.gmail.com
Discussion: https://postgr.es/m/CA+q6zcVovR+XY4mfk-7oNk-rF91gH0PebnNfuUjuuDsyHjOcVA@mail.gmail.com
2020-12-09 18:40:37 +01:00
|
|
|
subscriptOid, /* subscript procedure */
|
2002-09-04 22:31:48 +02:00
|
|
|
elemType, /* element type ID */
|
Support subscripting of arbitrary types, not only arrays.
This patch generalizes the subscripting infrastructure so that any
data type can be subscripted, if it provides a handler function to
define what that means. Traditional variable-length (varlena) arrays
all use array_subscript_handler(), while the existing fixed-length
types that support subscripting use raw_array_subscript_handler().
It's expected that other types that want to use subscripting notation
will define their own handlers. (This patch provides no such new
features, though; it only lays the foundation for them.)
To do this, move the parser's semantic processing of subscripts
(including coercion to whatever data type is required) into a
method callback supplied by the handler. On the execution side,
replace the ExecEvalSubscriptingRef* layer of functions with direct
calls to callback-supplied execution routines. (Thus, essentially
no new run-time overhead should be caused by this patch. Indeed,
there is room to remove some overhead by supplying specialized
execution routines. This patch does a little bit in that line,
but more could be done.)
Additional work is required here and there to remove formerly
hard-wired assumptions about the result type, collation, etc
of a SubscriptingRef expression node; and to remove assumptions
that the subscript values must be integers.
One useful side-effect of this is that we now have a less squishy
mechanism for identifying whether a data type is a "true" array:
instead of wiring in weird rules about typlen, we can look to see
if pg_type.typsubscript == F_ARRAY_SUBSCRIPT_HANDLER. For this
to be bulletproof, we have to forbid user-defined types from using
that handler directly; but there seems no good reason for them to
do so.
This patch also removes assumptions that the number of subscripts
is limited to MAXDIM (6), or indeed has any hard-wired limit.
That limit still applies to types handled by array_subscript_handler
or raw_array_subscript_handler, but to discourage other dependencies
on this constant, I've moved it from c.h to utils/array.h.
Dmitry Dolgov, reviewed at various times by Tom Lane, Arthur Zakirov,
Peter Eisentraut, Pavel Stehule
Discussion: https://postgr.es/m/CA+q6zcVDuGBv=M0FqBYX8DPebS3F_0KQ6OVFobGJPM507_SZ_w@mail.gmail.com
Discussion: https://postgr.es/m/CA+q6zcVovR+XY4mfk-7oNk-rF91gH0PebnNfuUjuuDsyHjOcVA@mail.gmail.com
2020-12-09 18:40:37 +01:00
|
|
|
false, /* this is not an implicit array type */
|
Support arrays of composite types, including the rowtypes of regular tables
and views (but not system catalogs, nor sequences or toast tables). Get rid
of the hardwired convention that a type's array type is named exactly "_type",
instead using a new column pg_type.typarray to provide the linkage. (It still
will be named "_type", though, except in odd corner cases such as
maximum-length type names.)
Along the way, make tracking of owner and schema dependencies for types more
uniform: a type directly created by the user has these dependencies, while a
table rowtype or auto-generated array type does not have them, but depends on
its parent object instead.
David Fetter, Andrew Dunstan, Tom Lane
2007-05-11 19:57:14 +02:00
|
|
|
array_oid, /* array type we are about to create */
|
2002-09-04 22:31:48 +02:00
|
|
|
InvalidOid, /* base type ID (only for domains) */
|
2002-04-15 07:22:04 +02:00
|
|
|
defaultValue, /* default type value */
|
2002-09-04 22:31:48 +02:00
|
|
|
NULL, /* no binary form available */
|
|
|
|
byValue, /* passed by value */
|
|
|
|
alignment, /* required alignment */
|
|
|
|
storage, /* TOAST strategy */
|
|
|
|
-1, /* typMod (Domains only) */
|
|
|
|
0, /* Array Dimensions of typbasetype */
|
2011-02-08 22:04:18 +01:00
|
|
|
false, /* Type NOT NULL */
|
2011-04-22 23:43:18 +02:00
|
|
|
collation); /* type's collation */
|
2015-04-22 21:23:02 +02:00
|
|
|
Assert(typoid == address.objectId);
|
2002-04-15 07:22:04 +02:00
|
|
|
|
|
|
|
/*
|
Support arrays of composite types, including the rowtypes of regular tables
and views (but not system catalogs, nor sequences or toast tables). Get rid
of the hardwired convention that a type's array type is named exactly "_type",
instead using a new column pg_type.typarray to provide the linkage. (It still
will be named "_type", though, except in odd corner cases such as
maximum-length type names.)
Along the way, make tracking of owner and schema dependencies for types more
uniform: a type directly created by the user has these dependencies, while a
table rowtype or auto-generated array type does not have them, but depends on
its parent object instead.
David Fetter, Andrew Dunstan, Tom Lane
2007-05-11 19:57:14 +02:00
|
|
|
* Create the array type that goes with it.
|
2002-04-15 07:22:04 +02:00
|
|
|
*/
|
Support arrays of composite types, including the rowtypes of regular tables
and views (but not system catalogs, nor sequences or toast tables). Get rid
of the hardwired convention that a type's array type is named exactly "_type",
instead using a new column pg_type.typarray to provide the linkage. (It still
will be named "_type", though, except in odd corner cases such as
maximum-length type names.)
Along the way, make tracking of owner and schema dependencies for types more
uniform: a type directly created by the user has these dependencies, while a
table rowtype or auto-generated array type does not have them, but depends on
its parent object instead.
David Fetter, Andrew Dunstan, Tom Lane
2007-05-11 19:57:14 +02:00
|
|
|
array_type = makeArrayTypeName(typeName, typeNamespace);
|
2002-04-15 07:22:04 +02:00
|
|
|
|
2020-03-04 16:34:25 +01:00
|
|
|
/* alignment must be TYPALIGN_INT or TYPALIGN_DOUBLE for arrays */
|
|
|
|
alignment = (alignment == TYPALIGN_DOUBLE) ? TYPALIGN_DOUBLE : TYPALIGN_INT;
|
2002-04-15 07:22:04 +02:00
|
|
|
|
2014-08-25 21:32:26 +02:00
|
|
|
TypeCreate(array_oid, /* force assignment of this type OID */
|
|
|
|
array_type, /* type name */
|
|
|
|
typeNamespace, /* namespace */
|
|
|
|
InvalidOid, /* relation oid (n/a here) */
			   0,				/* relation kind (ditto) */
			   GetUserId(),		/* owner's ID */
			   -1,				/* internal size (always varlena) */
			   TYPTYPE_BASE,	/* type-type (base type) */
			   TYPCATEGORY_ARRAY,	/* type-category (array) */
			   false,			/* array types are never preferred */
			   delimiter,		/* array element delimiter */
			   F_ARRAY_IN,		/* input procedure */
			   F_ARRAY_OUT,		/* output procedure */
			   F_ARRAY_RECV,	/* receive procedure */
			   F_ARRAY_SEND,	/* send procedure */
			   typmodinOid,		/* typmodin procedure */
			   typmodoutOid,	/* typmodout procedure */
			   F_ARRAY_TYPANALYZE,	/* analyze procedure */
			   F_ARRAY_SUBSCRIPT_HANDLER,	/* array subscript procedure */
			   typoid,			/* element type ID */
			   true,			/* yes this is an array type */
			   InvalidOid,		/* no further array type */
			   InvalidOid,		/* base type ID */
			   NULL,			/* never a default type value */
			   NULL,			/* binary default isn't sent either */
			   false,			/* never passed by value */
			   alignment,		/* see above */
			   TYPSTORAGE_EXTENDED, /* ARRAY is always toastable */
			   -1,				/* typMod (Domains only) */
			   0,				/* Array dimensions of typbasetype */
			   false,			/* Type NOT NULL */
			   collation);		/* type's collation */

	pfree(array_type);

	return address;
}

/*
 * Guts of type deletion.
 */
void
RemoveTypeById(Oid typeOid)
{
	Relation	relation;
	HeapTuple	tup;

	relation = table_open(TypeRelationId, RowExclusiveLock);

	tup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(typeOid));
	if (!HeapTupleIsValid(tup))
		elog(ERROR, "cache lookup failed for type %u", typeOid);

	CatalogTupleDelete(relation, &tup->t_self);

	/*
	 * If it is an enum, delete the pg_enum entries too; we don't bother with
	 * making dependency entries for those, so it has to be done "by hand"
	 * here.
	 */
	if (((Form_pg_type) GETSTRUCT(tup))->typtype == TYPTYPE_ENUM)
		EnumValuesDelete(typeOid);

	/*
	 * If it is a range type, delete the pg_range entry too; we don't bother
	 * with making a dependency entry for that, so it has to be done "by hand"
	 * here.
	 */
	if (((Form_pg_type) GETSTRUCT(tup))->typtype == TYPTYPE_RANGE)
		RangeDelete(typeOid);

	ReleaseSysCache(tup);

	table_close(relation, RowExclusiveLock);
}
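/*
 * Illustrative note (not part of the original source): these routines back
 * ordinary SQL type-manipulation commands.  RemoveTypeById above is the
 * low-level guts reached through dependency-driven deletion of a type, as
 * triggered by, e.g.,
 *
 *		DROP TYPE complex CASCADE;
 *
 * while DefineDomain below services statements such as
 *
 *		CREATE DOMAIN us_postal_code AS text
 *			CHECK (VALUE ~ '^[0-9]{5}$');
 */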

/*
 * DefineDomain
 *		Registers a new domain.
 */
ObjectAddress
DefineDomain(CreateDomainStmt *stmt)
{
	char	   *domainName;
	char	   *domainArrayName;
	Oid			domainNamespace;
	AclResult	aclresult;
	int16		internalLength;
	Oid			inputProcedure;
	Oid			outputProcedure;
	Oid			receiveProcedure;
	Oid			sendProcedure;
	Oid			analyzeProcedure;
	bool		byValue;
	char		category;
	char		delimiter;
	char		alignment;
	char		storage;
	char		typtype;
	Datum		datum;
	bool		isnull;
	char	   *defaultValue = NULL;
	char	   *defaultValueBin = NULL;
	bool		saw_default = false;
	bool		typNotNull = false;
	bool		nullDefined = false;
	int32		typNDims = list_length(stmt->typeName->arrayBounds);
	HeapTuple	typeTup;
	List	   *schema = stmt->constraints;
	ListCell   *listptr;
	Oid			basetypeoid;
	Oid			old_type_oid;
	Oid			domaincoll;
	Oid			domainArrayOid;
	Form_pg_type baseType;
	int32		basetypeMod;
	Oid			baseColl;
	ObjectAddress address;

	/* Convert list of names to a name and namespace */
	domainNamespace = QualifiedNameGetCreationNamespace(stmt->domainname,
														&domainName);

	/* Check we have creation rights in target namespace */
	aclresult = pg_namespace_aclcheck(domainNamespace, GetUserId(),
									  ACL_CREATE);
	if (aclresult != ACLCHECK_OK)
		aclcheck_error(aclresult, OBJECT_SCHEMA,
					   get_namespace_name(domainNamespace));

	/*
	 * Check for collision with an existing type name.  If there is one and
	 * it's an autogenerated array, we can rename it out of the way.
	 */
	old_type_oid = GetSysCacheOid2(TYPENAMENSP, Anum_pg_type_oid,
								   CStringGetDatum(domainName),
								   ObjectIdGetDatum(domainNamespace));
	if (OidIsValid(old_type_oid))
	{
		if (!moveArrayTypeName(old_type_oid, domainName, domainNamespace))
			ereport(ERROR,
					(errcode(ERRCODE_DUPLICATE_OBJECT),
					 errmsg("type \"%s\" already exists", domainName)));
	}

	/*
	 * Look up the base type.
	 */
|
Remove collation information from TypeName, where it does not belong.
The initial collations patch treated a COLLATE spec as part of a TypeName,
following what can only be described as brain fade on the part of the SQL
committee. It's a lot more reasonable to treat COLLATE as a syntactically
separate object, so that it can be added in only the productions where it
actually belongs, rather than needing to reject it in a boatload of places
where it doesn't belong (something the original patch mostly failed to do).
In addition this change lets us meet the spec's requirement to allow
COLLATE anywhere in the clauses of a ColumnDef, and it avoids unfriendly
behavior for constructs such as "foo::type COLLATE collation".
To do this, pull collation information out of TypeName and put it in
ColumnDef instead, thus reverting most of the collation-related changes in
parse_type.c's API. I made one additional structural change, which was to
use a ColumnDef as an intermediate node in AT_AlterColumnType AlterTableCmd
nodes. This provides enough room to get rid of the "transform" wart in
AlterTableCmd too, since the ColumnDef can carry the USING expression
easily enough.
Also fix some other minor bugs that have crept in in the same areas,
like failure to copy recently-added fields of ColumnDef in copyfuncs.c.
While at it, document the formerly secret ability to specify a collation
in ALTER TABLE ALTER COLUMN TYPE, ALTER TYPE ADD ATTRIBUTE, and
ALTER TYPE ALTER ATTRIBUTE TYPE; and correct some misstatements about
what the default collation selection will be when COLLATE is omitted.
BTW, the three-parameter form of format_type() should go away too,
since it just contributes to the confusion in this area; but I'll do
that in a separate patch.
2011-03-10 04:38:52 +01:00
|
|
|
typeTup = typenameType(NULL, stmt->typeName, &basetypeMod);
|
2002-07-12 20:43:19 +02:00
|
|
|
baseType = (Form_pg_type) GETSTRUCT(typeTup);
|
Remove WITH OIDS support, change oid catalog column visibility.
Previously tables declared WITH OIDS, including a significant fraction
of the catalog tables, stored the oid column not as a normal column,
but as part of the tuple header.
This special column was not shown by default, which was somewhat odd,
as it's often (consider e.g. pg_class.oid) one of the more important
parts of a row. Neither pg_dump nor COPY included the contents of the
oid column by default.
The fact that the oid column was not an ordinary column necessitated a
significant amount of special case code to support oid columns. That
already was painful for the existing, but upcoming work aiming to make
table storage pluggable, would have required expanding and duplicating
that "specialness" significantly.
WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0).
Remove it.
Removing includes:
- CREATE TABLE and ALTER TABLE syntax for declaring the table to be
  WITH OIDS has been removed (WITH (oids[ = true]) will error out)
- pg_dump does not support dumping tables declared WITH OIDS and will
  issue a warning when dumping one (and ignore the oid column).
- restoring a pg_dump archive with pg_restore will warn when
  restoring a table with oid contents (and ignore the oid column)
- COPY will refuse to load a binary dump that includes oids.
- pg_upgrade will error out when encountering tables declared WITH
  OIDS; they have to be altered to remove the oid column first.
- Functionality to access the oid of the last inserted row (like
  plpgsql's RESULT_OID, spi's SPI_lastoid, ...) has been removed.
The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false)
for CREATE TABLE) is still supported. While that requires a bit of
support code, it seems unnecessary to break applications / dumps that
do not use oids, and are explicit about not using them.
The biggest user of WITH OID columns was postgres' catalog. This
commit changes all 'magic' oid columns to be columns that are normally
declared and stored. To reduce unnecessary query breakage all the
newly added columns are still named 'oid', even if a table's column
naming scheme would indicate 'reloid' or such. This obviously
requires adapting a lot of code, mostly replacing oid access via
HeapTupleGetOid() with access to the underlying Form_pg_*->oid column.
The bootstrap process now assigns oids for all oid columns in
genbki.pl that do not have an explicit value (starting at the largest
oid previously used); only oids assigned later by initdb will be above
FirstBootstrapObjectId. As the oid column now is a normal column the
special bootstrap syntax for oids has been removed.
Oids are not automatically assigned during insertion anymore; all
backend code explicitly assigns oids with GetNewOidWithIndex(). For
the rare case that insertions into the catalog via SQL are called for,
the new pg_nextoid() function can be used (which only works on catalog
tables).
The fact that oid columns on system tables are now normal columns
means that they will be included in the set of columns expanded
by * (i.e. SELECT * FROM pg_class will now include the table's oid,
previously it did not). It'd not technically be hard to hide the oid
column by default, but that'd mean confusing behavior would either
have to be carried forward forever, or it'd cause breakage down the
line.
While it's not unlikely that further adjustments are needed, the
scope/invasiveness of the patch makes it worthwhile to get this
merged now. It's painful to maintain externally, too complicated to
commit after the code freeze, and a dependency of a number of other
patches.
Catversion bump, for obvious reasons.
Author: Andres Freund, with contributions by John Naylor
Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de
2018-11-21 00:36:57 +01:00
	basetypeoid = baseType->oid;
	/*
	 * Base type must be a plain base type, a composite type, another domain,
	 * an enum, a range type, or a multirange type.  Domains over pseudotypes
	 * would create a security hole.  (It would be shorter to code this to
	 * just check for pseudotypes; but it seems safer to call out the
	 * specific typtypes that are supported, rather than assume that all
	 * future typtypes would be automatically supported.)
	 */
	typtype = baseType->typtype;
	if (typtype != TYPTYPE_BASE &&
		typtype != TYPTYPE_COMPOSITE &&
		typtype != TYPTYPE_DOMAIN &&
		typtype != TYPTYPE_ENUM &&
Multirange datatypes
Multiranges are basically sorted arrays of non-overlapping ranges with
set-theoretic operations defined over them.
Since v14, each range type automatically gets a corresponding multirange
datatype. There are both manual and automatic mechanisms for naming multirange
types. One can specify a multirange type name using the multirange_type_name
attribute in CREATE TYPE. Otherwise, a multirange type name is generated
automatically. If the range type name contains "range" then we change that to
"multirange". Otherwise, we add "_multirange" to the end.
Implementation of multiranges comes with a space-efficient internal
representation format, which avoids extra padding and duplicated storage of
oids. Altogether this format allows fetching a particular range by its index
in O(n).
Statistics gathering and selectivity estimation are implemented for multiranges.
For this purpose, a stored multirange is approximated as a union range without
gaps. This area will likely need improvements in the future.
Catversion is bumped.
Discussion: https://postgr.es/m/CALNJ-vSUpQ_Y%3DjXvTxt1VYFztaBSsWVXeF1y6gTYQ4bOiWDLgQ%40mail.gmail.com
Discussion: https://postgr.es/m/a0b8026459d1e6167933be2104a6174e7d40d0ab.camel%40j-davis.com#fe7218c83b08068bfffb0c5293eceda0
Author: Paul Jungwirth, revised by me
Reviewed-by: David Fetter, Corey Huinker, Jeff Davis, Pavel Stehule
Reviewed-by: Alvaro Herrera, Tom Lane, Isaac Morland, David G. Johnston
Reviewed-by: Zhihong Yu, Alexander Korotkov
2020-12-20 05:20:33 +01:00
		typtype != TYPTYPE_RANGE &&
		typtype != TYPTYPE_MULTIRANGE)
		ereport(ERROR,
				(errcode(ERRCODE_DATATYPE_MISMATCH),
				 errmsg("\"%s\" is not a valid base type for a domain",
						TypeNameToString(stmt->typeName))));

	aclresult = pg_type_aclcheck(basetypeoid, GetUserId(), ACL_USAGE);
	if (aclresult != ACLCHECK_OK)
		aclcheck_error_type(aclresult, basetypeoid);

	/*
	 * Collect the properties of the new domain.  Some are inherited from the
	 * base type, some are not.  If you change any of this inheritance
	 * behavior, be sure to update AlterTypeRecurse() to match!
	 */
Remove collation information from TypeName, where it does not belong.
The initial collations patch treated a COLLATE spec as part of a TypeName,
following what can only be described as brain fade on the part of the SQL
committee. It's a lot more reasonable to treat COLLATE as a syntactically
separate object, so that it can be added in only the productions where it
actually belongs, rather than needing to reject it in a boatload of places
where it doesn't belong (something the original patch mostly failed to do).
In addition this change lets us meet the spec's requirement to allow
COLLATE anywhere in the clauses of a ColumnDef, and it avoids unfriendly
behavior for constructs such as "foo::type COLLATE collation".
To do this, pull collation information out of TypeName and put it in
ColumnDef instead, thus reverting most of the collation-related changes in
parse_type.c's API. I made one additional structural change, which was to
use a ColumnDef as an intermediate node in AT_AlterColumnType AlterTableCmd
nodes. This provides enough room to get rid of the "transform" wart in
AlterTableCmd too, since the ColumnDef can carry the USING expression
easily enough.
Also fix some other minor bugs that have crept in in the same areas,
like failure to copy recently-added fields of ColumnDef in copyfuncs.c.
While at it, document the formerly secret ability to specify a collation
in ALTER TABLE ALTER COLUMN TYPE, ALTER TYPE ADD ATTRIBUTE, and
ALTER TYPE ALTER ATTRIBUTE TYPE; and correct some misstatements about
what the default collation selection will be when COLLATE is omitted.
BTW, the three-parameter form of format_type() should go away too,
since it just contributes to the confusion in this area; but I'll do
that in a separate patch.
2011-03-10 04:38:52 +01:00
	/*
	 * Identify the collation if any
	 */
	baseColl = baseType->typcollation;
	if (stmt->collClause)
		domaincoll = get_collation_oid(stmt->collClause->collname, false);
	else
		domaincoll = baseColl;

	/* Complain if COLLATE is applied to an uncollatable type */
	if (OidIsValid(domaincoll) && !OidIsValid(baseColl))
		ereport(ERROR,
				(errcode(ERRCODE_DATATYPE_MISMATCH),
				 errmsg("collations are not supported by type %s",
						format_type_be(basetypeoid))));

	/* passed by value */
	byValue = baseType->typbyval;

	/* Required Alignment */
	alignment = baseType->typalign;

	/* TOAST Strategy */
	storage = baseType->typstorage;

	/* Storage Length */
	internalLength = baseType->typlen;
Replace the hard-wired type knowledge in TypeCategory() and IsPreferredType()
with system catalog lookups, as was foreseen to be necessary almost since
their creation. Instead put the information into two new pg_type columns,
typcategory and typispreferred. Add support for setting these when
creating a user-defined base type.
The category column is just a "char" (i.e. a poor man's enum), allowing
a crude form of user extensibility of the category list: just use an
otherwise-unused character. This seems sufficient for foreseen uses,
but we could upgrade to having an actual category catalog someday, if
there proves to be a huge demand for custom type categories.
In this patch I have attempted to hew exactly to the behavior of the
previous hardwired logic, except for introducing new type categories for
arrays, composites, and enums. In particular the default preferred state
for user-defined types remains TRUE. That seems worth revisiting, but it
should be done as a separate patch from introducing the infrastructure.
Likewise, any adjustment of the standard set of categories should be done
separately.
2008-07-30 19:05:05 +02:00
	/* Type Category */
	category = baseType->typcategory;

	/* Array element Delimiter */
	delimiter = baseType->typdelim;

	/* I/O Functions */
	inputProcedure = F_DOMAIN_IN;
	outputProcedure = baseType->typoutput;
	receiveProcedure = F_DOMAIN_RECV;
	sendProcedure = baseType->typsend;

	/* Domains never accept typmods, so no typmodin/typmodout needed */

	/* Analysis function */
	analyzeProcedure = baseType->typanalyze;
Support subscripting of arbitrary types, not only arrays.
This patch generalizes the subscripting infrastructure so that any
data type can be subscripted, if it provides a handler function to
define what that means. Traditional variable-length (varlena) arrays
all use array_subscript_handler(), while the existing fixed-length
types that support subscripting use raw_array_subscript_handler().
It's expected that other types that want to use subscripting notation
will define their own handlers. (This patch provides no such new
features, though; it only lays the foundation for them.)
To do this, move the parser's semantic processing of subscripts
(including coercion to whatever data type is required) into a
method callback supplied by the handler. On the execution side,
replace the ExecEvalSubscriptingRef* layer of functions with direct
calls to callback-supplied execution routines. (Thus, essentially
no new run-time overhead should be caused by this patch. Indeed,
there is room to remove some overhead by supplying specialized
execution routines. This patch does a little bit in that line,
but more could be done.)
Additional work is required here and there to remove formerly
hard-wired assumptions about the result type, collation, etc
of a SubscriptingRef expression node; and to remove assumptions
that the subscript values must be integers.
One useful side-effect of this is that we now have a less squishy
mechanism for identifying whether a data type is a "true" array:
instead of wiring in weird rules about typlen, we can look to see
if pg_type.typsubscript == F_ARRAY_SUBSCRIPT_HANDLER. For this
to be bulletproof, we have to forbid user-defined types from using
that handler directly; but there seems no good reason for them to
do so.
This patch also removes assumptions that the number of subscripts
is limited to MAXDIM (6), or indeed has any hard-wired limit.
That limit still applies to types handled by array_subscript_handler
or raw_array_subscript_handler, but to discourage other dependencies
on this constant, I've moved it from c.h to utils/array.h.
Dmitry Dolgov, reviewed at various times by Tom Lane, Arthur Zakirov,
Peter Eisentraut, Pavel Stehule
Discussion: https://postgr.es/m/CA+q6zcVDuGBv=M0FqBYX8DPebS3F_0KQ6OVFobGJPM507_SZ_w@mail.gmail.com
Discussion: https://postgr.es/m/CA+q6zcVovR+XY4mfk-7oNk-rF91gH0PebnNfuUjuuDsyHjOcVA@mail.gmail.com
2020-12-09 18:40:37 +01:00
	/*
	 * Domains don't need a subscript procedure, since they are not
	 * subscriptable on their own.  If the base type is subscriptable, the
	 * parser will reduce the type to the base type before subscripting.
	 */

	/* Inherited default value */
	datum = SysCacheGetAttr(TYPEOID, typeTup,
							Anum_pg_type_typdefault, &isnull);
	if (!isnull)
		defaultValue = TextDatumGetCString(datum);

	/* Inherited default binary value */
	datum = SysCacheGetAttr(TYPEOID, typeTup,
							Anum_pg_type_typdefaultbin, &isnull);
	if (!isnull)
		defaultValueBin = TextDatumGetCString(datum);
	/*
	 * Run through constraints manually to avoid the additional processing
	 * conducted by DefineRelation() and friends.
	 */
	foreach(listptr, schema)
	{
		Constraint *constr = lfirst(listptr);

		if (!IsA(constr, Constraint))
			elog(ERROR, "unrecognized node type: %d",
				 (int) nodeTag(constr));
		switch (constr->contype)
		{
			case CONSTR_DEFAULT:

				/*
				 * The inherited default value may be overridden by the user
				 * with the DEFAULT <expr> clause ... but only once.
				 */
				if (saw_default)
					ereport(ERROR,
							(errcode(ERRCODE_SYNTAX_ERROR),
							 errmsg("multiple default expressions")));
				saw_default = true;

				if (constr->raw_expr)
				{
					ParseState *pstate;
					Node	   *defaultExpr;

					/* Create a dummy ParseState for transformExpr */
					pstate = make_parsestate(NULL);

					/*
					 * Cook the constr->raw_expr into an expression.  Note:
					 * name is strictly for error message.
					 */
					defaultExpr = cookDefault(pstate, constr->raw_expr,
											  basetypeoid,
											  basetypeMod,
											  domainName,
											  0);

					/*
					 * If the expression is just a NULL constant, we treat it
					 * like not having a default.
					 *
					 * Note that if the basetype is another domain, we'll see
					 * a CoerceToDomain expr here and not discard the default.
					 * This is critical because the domain default needs to be
					 * retained to override any default that the base domain
					 * might have.
					 */
					if (defaultExpr == NULL ||
						(IsA(defaultExpr, Const) &&
						 ((Const *) defaultExpr)->constisnull))
					{
						defaultValue = NULL;
						defaultValueBin = NULL;
					}
					else
					{
						/*
						 * Expression must be stored as a nodeToString result,
						 * but we also require a valid textual representation
						 * (mainly to make life easier for pg_dump).
						 */
						defaultValue =
							deparse_expression(defaultExpr,
											   NIL, false, false);
						defaultValueBin = nodeToString(defaultExpr);
					}
				}
				else
				{
					/* No default (can this still happen?) */
					defaultValue = NULL;
					defaultValueBin = NULL;
				}
				break;

			case CONSTR_NOTNULL:
				if (nullDefined && !typNotNull)
					ereport(ERROR,
							(errcode(ERRCODE_SYNTAX_ERROR),
Phase 3 of pgindent updates.
Don't move parenthesized lines to the left, even if that means they
flow past the right margin.
By default, BSD indent lines up statement continuation lines that are
within parentheses so that they start just to the right of the preceding
left parenthesis. However, traditionally, if that resulted in the
continuation line extending to the right of the desired right margin,
then indent would push it left just far enough to not overrun the margin,
if it could do so without making the continuation line start to the left of
the current statement indent. That makes for a weird mix of indentations
unless one has been completely rigid about never violating the 80-column
limit.
This behavior has been pretty universally panned by Postgres developers.
Hence, disable it with indent's new -lpl switch, so that parenthesized
lines are always lined up with the preceding left paren.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 21:35:54 +02:00
							 errmsg("conflicting NULL/NOT NULL constraints")));
				typNotNull = true;
				nullDefined = true;
				break;

			case CONSTR_NULL:
				if (nullDefined && typNotNull)
					ereport(ERROR,
							(errcode(ERRCODE_SYNTAX_ERROR),
							 errmsg("conflicting NULL/NOT NULL constraints")));
				typNotNull = false;
				nullDefined = true;
				break;

			case CONSTR_CHECK:

				/*
				 * Check constraints are handled after domain creation, as
				 * they require the Oid of the domain; at this point we can
				 * only check that they're not marked NO INHERIT, because
				 * that would be bogus.
				 */
				if (constr->is_no_inherit)
					ereport(ERROR,
							(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
							 errmsg("check constraints for domains cannot be marked NO INHERIT")));
				break;

				/*
				 * All else are error cases
				 */
			case CONSTR_UNIQUE:
				ereport(ERROR,
						(errcode(ERRCODE_SYNTAX_ERROR),
						 errmsg("unique constraints not possible for domains")));
				break;

			case CONSTR_PRIMARY:
				ereport(ERROR,
						(errcode(ERRCODE_SYNTAX_ERROR),
						 errmsg("primary key constraints not possible for domains")));
				break;

			case CONSTR_EXCLUSION:
				ereport(ERROR,
						(errcode(ERRCODE_SYNTAX_ERROR),
						 errmsg("exclusion constraints not possible for domains")));
				break;

			case CONSTR_FOREIGN:
				ereport(ERROR,
						(errcode(ERRCODE_SYNTAX_ERROR),
						 errmsg("foreign key constraints not possible for domains")));
				break;

			case CONSTR_ATTR_DEFERRABLE:
			case CONSTR_ATTR_NOT_DEFERRABLE:
			case CONSTR_ATTR_DEFERRED:
			case CONSTR_ATTR_IMMEDIATE:
				ereport(ERROR,
						(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
						 errmsg("specifying constraint deferrability not supported for domains")));
				break;

			default:
				elog(ERROR, "unrecognized constraint subtype: %d",
					 (int) constr->contype);
				break;
		}
	}
Support arrays over domains.
Allowing arrays with a domain type as their element type was left un-done
in the original domain patch, but not for any very good reason. This
omission leads to such surprising results as array_agg() not working on
a domain column, because the parser can't identify a suitable output type
for the polymorphic aggregate.
In order to fix this, first clean up the APIs of coerce_to_domain() and
some internal functions in parse_coerce.c so that we consistently pass
around a CoercionContext along with CoercionForm. Previously, we sometimes
passed an "isExplicit" boolean flag instead, which is strictly less
information; and coerce_to_domain() didn't even get that, but instead had
to reverse-engineer isExplicit from CoercionForm. That's contrary to the
documentation in primnodes.h that says that CoercionForm only affects
display and not semantics. I don't think this change fixes any live bugs,
but it makes things more consistent. The main reason for doing it though
is that now build_coercion_expression() receives ccontext, which it needs
in order to be able to recursively invoke coerce_to_target_type().
Next, reimplement ArrayCoerceExpr so that the node does not directly know
any details of what has to be done to the individual array elements while
performing the array coercion. Instead, the per-element processing is
represented by a sub-expression whose input is a source array element and
whose output is a target array element. This simplifies life in
parse_coerce.c, because it can build that sub-expression by a recursive
invocation of coerce_to_target_type(). The executor now handles the
per-element processing as a compiled expression instead of hard-wired code.
The main advantage of this is that we can use a single ArrayCoerceExpr to
handle as many as three successive steps per element: base type conversion,
typmod coercion, and domain constraint checking. The old code used two
stacked ArrayCoerceExprs to handle type + typmod coercion, which was pretty
inefficient, and adding yet another array deconstruction to do domain
constraint checking seemed very unappetizing.
In the case where we just need a single, very simple coercion function,
doing this straightforwardly leads to a noticeable increase in the
per-array-element runtime cost. Hence, add an additional shortcut evalfunc
in execExprInterp.c that skips unnecessary overhead for that specific form
of expression. The runtime speed of simple cases is within 1% or so of
where it was before, while cases that previously required two levels of
array processing are significantly faster.
Finally, create an implicit array type for every domain type, as we do for
base types, enums, etc. Everything except the array-coercion case seems
to just work without further effort.
Tom Lane, reviewed by Andrew Dunstan
Discussion: https://postgr.es/m/9852.1499791473@sss.pgh.pa.us
2017-09-30 19:40:56 +02:00
	/* Allocate OID for array type */
	domainArrayOid = AssignTypeArrayOid();

	/*
	 * Have TypeCreate do all the real work.
	 */
Change many routines to return ObjectAddress rather than OID
The changed routines are mostly those that can be directly called by
ProcessUtilitySlow; the intention is to make the affected object
information more precise, in support for future event trigger changes.
Originally it was envisioned that the OID of the affected object would
be enough, and in most cases that is correct, but upon actually
implementing the event trigger changes it turned out that ObjectAddress
is more widely useful.
Additionally, some command execution routines grew an output argument
that's an object address which provides further info about the executed
command. To wit:
* for ALTER DOMAIN / ADD CONSTRAINT, it corresponds to the address of
the new constraint
* for ALTER OBJECT / SET SCHEMA, it corresponds to the address of the
schema that originally contained the object.
* for ALTER EXTENSION {ADD, DROP} OBJECT, it corresponds to the address
of the object added to or dropped from the extension.
There's no user-visible change in this commit, and no functional change
either.
Discussion: 20150218213255.GC6717@tamriel.snowman.net
Reviewed-By: Stephen Frost, Andres Freund
2015-03-03 18:10:50 +01:00
	address =
Support arrays of composite types, including the rowtypes of regular tables
and views (but not system catalogs, nor sequences or toast tables). Get rid
of the hardwired convention that a type's array type is named exactly "_type",
instead using a new column pg_type.typarray to provide the linkage. (It still
will be named "_type", though, except in odd corner cases such as
maximum-length type names.)
Along the way, make tracking of owner and schema dependencies for types more
uniform: a type directly created by the user has these dependencies, while a
table rowtype or auto-generated array type does not have them, but depends on
its parent object instead.
David Fetter, Andrew Dunstan, Tom Lane
2007-05-11 19:57:14 +02:00
		TypeCreate(InvalidOid,	/* no predetermined type OID */
				   domainName,	/* type name */
Phase 2 of pgindent updates.
Change pg_bsd_indent to follow upstream rules for placement of comments
to the right of code, and remove pgindent hack that caused comments
following #endif to not obey the general rule.
Commit e3860ffa4dd0dad0dd9eea4be9cc1412373a8c89 wasn't actually using
the published version of pg_bsd_indent, but a hacked-up version that
tried to minimize the amount of movement of comments to the right of
code. The situation of interest is where such a comment has to be
moved to the right of its default placement at column 33 because there's
code there. BSD indent has always moved right in units of tab stops
in such cases --- but in the previous incarnation, indent was working
in 8-space tab stops, while now it knows we use 4-space tabs. So the
net result is that in about half the cases, such comments are placed
one tab stop left of before. This is better all around: it leaves
more room on the line for comment text, and it means that in such
cases the comment uniformly starts at the next 4-space tab stop after
the code, rather than sometimes one and sometimes two tabs after.
Also, ensure that comments following #endif are indented the same
as comments following other preprocessor commands such as #else.
That inconsistency turns out to have been self-inflicted damage
from a poorly-thought-through post-indent "fixup" in pgindent.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 21:18:54 +02:00
				   domainNamespace, /* namespace */
				   InvalidOid,	/* relation oid (n/a here) */
				   0,			/* relation kind (ditto) */
				   GetUserId(), /* owner's ID */
				   internalLength,	/* internal size */
				   TYPTYPE_DOMAIN,	/* type-type (domain type) */
Replace the hard-wired type knowledge in TypeCategory() and IsPreferredType()
with system catalog lookups, as was foreseen to be necessary almost since
their creation. Instead put the information into two new pg_type columns,
typcategory and typispreferred. Add support for setting these when
creating a user-defined base type.
The category column is just a "char" (i.e. a poor man's enum), allowing
a crude form of user extensibility of the category list: just use an
otherwise-unused character. This seems sufficient for foreseen uses,
but we could upgrade to having an actual category catalog someday, if
there proves to be a huge demand for custom type categories.
In this patch I have attempted to hew exactly to the behavior of the
previous hardwired logic, except for introducing new type categories for
arrays, composites, and enums. In particular the default preferred state
for user-defined types remains TRUE. That seems worth revisiting, but it
should be done as a separate patch from introducing the infrastructure.
Likewise, any adjustment of the standard set of categories should be done
separately.
2008-07-30 19:05:05 +02:00
				   category,	/* type-category */
				   false,		/* domain types are never preferred */
				   delimiter,	/* array element delimiter */
				   inputProcedure,	/* input procedure */
				   outputProcedure, /* output procedure */
				   receiveProcedure,	/* receive procedure */
				   sendProcedure,	/* send procedure */
				   InvalidOid,	/* typmodin procedure - none */
				   InvalidOid,	/* typmodout procedure - none */
				   analyzeProcedure,	/* analyze procedure */
Support subscripting of arbitrary types, not only arrays.
This patch generalizes the subscripting infrastructure so that any
data type can be subscripted, if it provides a handler function to
define what that means. Traditional variable-length (varlena) arrays
all use array_subscript_handler(), while the existing fixed-length
types that support subscripting use raw_array_subscript_handler().
It's expected that other types that want to use subscripting notation
will define their own handlers. (This patch provides no such new
features, though; it only lays the foundation for them.)
To do this, move the parser's semantic processing of subscripts
(including coercion to whatever data type is required) into a
method callback supplied by the handler. On the execution side,
replace the ExecEvalSubscriptingRef* layer of functions with direct
calls to callback-supplied execution routines. (Thus, essentially
no new run-time overhead should be caused by this patch. Indeed,
there is room to remove some overhead by supplying specialized
execution routines. This patch does a little bit in that line,
but more could be done.)
Additional work is required here and there to remove formerly
hard-wired assumptions about the result type, collation, etc
of a SubscriptingRef expression node; and to remove assumptions
that the subscript values must be integers.
One useful side-effect of this is that we now have a less squishy
mechanism for identifying whether a data type is a "true" array:
instead of wiring in weird rules about typlen, we can look to see
if pg_type.typsubscript == F_ARRAY_SUBSCRIPT_HANDLER. For this
to be bulletproof, we have to forbid user-defined types from using
that handler directly; but there seems no good reason for them to
do so.
This patch also removes assumptions that the number of subscripts
is limited to MAXDIM (6), or indeed has any hard-wired limit.
That limit still applies to types handled by array_subscript_handler
or raw_array_subscript_handler, but to discourage other dependencies
on this constant, I've moved it from c.h to utils/array.h.
Dmitry Dolgov, reviewed at various times by Tom Lane, Arthur Zakirov,
Peter Eisentraut, Pavel Stehule
Discussion: https://postgr.es/m/CA+q6zcVDuGBv=M0FqBYX8DPebS3F_0KQ6OVFobGJPM507_SZ_w@mail.gmail.com
Discussion: https://postgr.es/m/CA+q6zcVovR+XY4mfk-7oNk-rF91gH0PebnNfuUjuuDsyHjOcVA@mail.gmail.com
2020-12-09 18:40:37 +01:00
				   InvalidOid,	/* subscript procedure - none */
Improve handling of domains over arrays.
This patch eliminates various bizarre behaviors caused by sloppy thinking
about the difference between a domain type and its underlying array type.
In particular, the operation of updating one element of such an array
has to be considered as yielding a value of the underlying array type,
*not* a value of the domain, because there's no assurance that the
domain's CHECK constraints are still satisfied. If we're intending to
store the result back into a domain column, we have to re-cast to the
domain type so that constraints are re-checked.
For similar reasons, such a domain can't be blindly matched to an ANYARRAY
polymorphic parameter, because the polymorphic function is likely to apply
array-ish operations that could invalidate the domain constraints. For the
moment, we just forbid such matching. We might later wish to insert an
automatic downcast to the underlying array type, but such a change should
also change matching of domains to ANYELEMENT for consistency.
To ensure that all such logic is rechecked, this patch removes the original
hack of setting a domain's pg_type.typelem field to match its base type;
the typelem will always be zero instead. In those places where it's really
okay to look through the domain type with no other logic changes, use the
newly added get_base_element_type function in place of get_element_type.
catversion bumped due to change in pg_type contents.
Per bug #5717 from Richard Huxton and subsequent discussion.
2010-10-21 22:07:17 +02:00
				   InvalidOid,	/* no array element type */
				   false,		/* this isn't an array */
Support arrays over domains.
Allowing arrays with a domain type as their element type was left un-done
in the original domain patch, but not for any very good reason. This
omission leads to such surprising results as array_agg() not working on
a domain column, because the parser can't identify a suitable output type
for the polymorphic aggregate.
In order to fix this, first clean up the APIs of coerce_to_domain() and
some internal functions in parse_coerce.c so that we consistently pass
around a CoercionContext along with CoercionForm. Previously, we sometimes
passed an "isExplicit" boolean flag instead, which is strictly less
information; and coerce_to_domain() didn't even get that, but instead had
to reverse-engineer isExplicit from CoercionForm. That's contrary to the
documentation in primnodes.h that says that CoercionForm only affects
display and not semantics. I don't think this change fixes any live bugs,
but it makes things more consistent. The main reason for doing it though
is that now build_coercion_expression() receives ccontext, which it needs
in order to be able to recursively invoke coerce_to_target_type().
Next, reimplement ArrayCoerceExpr so that the node does not directly know
any details of what has to be done to the individual array elements while
performing the array coercion. Instead, the per-element processing is
represented by a sub-expression whose input is a source array element and
whose output is a target array element. This simplifies life in
parse_coerce.c, because it can build that sub-expression by a recursive
invocation of coerce_to_target_type(). The executor now handles the
per-element processing as a compiled expression instead of hard-wired code.
The main advantage of this is that we can use a single ArrayCoerceExpr to
handle as many as three successive steps per element: base type conversion,
typmod coercion, and domain constraint checking. The old code used two
stacked ArrayCoerceExprs to handle type + typmod coercion, which was pretty
inefficient, and adding yet another array deconstruction to do domain
constraint checking seemed very unappetizing.
In the case where we just need a single, very simple coercion function,
doing this straightforwardly leads to a noticeable increase in the
per-array-element runtime cost. Hence, add an additional shortcut evalfunc
in execExprInterp.c that skips unnecessary overhead for that specific form
of expression. The runtime speed of simple cases is within 1% or so of
where it was before, while cases that previously required two levels of
array processing are significantly faster.
Finally, create an implicit array type for every domain type, as we do for
base types, enums, etc. Everything except the array-coercion case seems
to just work without further effort.
Tom Lane, reviewed by Andrew Dunstan
Discussion: https://postgr.es/m/9852.1499791473@sss.pgh.pa.us
2017-09-30 19:40:56 +02:00
				   domainArrayOid,	/* array type we are about to create */
				   basetypeoid, /* base type ID */
				   defaultValue,	/* default type value (text) */
				   defaultValueBin, /* default type value (binary) */
				   byValue,		/* passed by value */
				   alignment,	/* required alignment */
				   storage,		/* TOAST strategy */
				   basetypeMod, /* typeMod value */
				   typNDims,	/* Array dimensions for base type */
				   typNotNull,	/* Type NOT NULL */
				   domaincoll); /* type's collation */
	/*
	 * Create the array type that goes with it.
	 */
	domainArrayName = makeArrayTypeName(domainName, domainNamespace);
	/* alignment must be TYPALIGN_INT or TYPALIGN_DOUBLE for arrays */
	alignment = (alignment == TYPALIGN_DOUBLE) ? TYPALIGN_DOUBLE : TYPALIGN_INT;
	TypeCreate(domainArrayOid,	/* force assignment of this type OID */
			   domainArrayName, /* type name */
			   domainNamespace, /* namespace */
			   InvalidOid,	/* relation oid (n/a here) */
			   0,			/* relation kind (ditto) */
			   GetUserId(), /* owner's ID */
			   -1,			/* internal size (always varlena) */
			   TYPTYPE_BASE,	/* type-type (base type) */
			   TYPCATEGORY_ARRAY,	/* type-category (array) */
			   false,		/* array types are never preferred */
			   delimiter,	/* array element delimiter */
			   F_ARRAY_IN,	/* input procedure */
			   F_ARRAY_OUT, /* output procedure */
			   F_ARRAY_RECV,	/* receive procedure */
			   F_ARRAY_SEND,	/* send procedure */
			   InvalidOid,	/* typmodin procedure - none */
			   InvalidOid,	/* typmodout procedure - none */
			   F_ARRAY_TYPANALYZE,	/* analyze procedure */
			   F_ARRAY_SUBSCRIPT_HANDLER,	/* array subscript procedure */
Support arrays over domains.
Allowing arrays with a domain type as their element type was left un-done
in the original domain patch, but not for any very good reason. This
omission leads to such surprising results as array_agg() not working on
a domain column, because the parser can't identify a suitable output type
for the polymorphic aggregate.
In order to fix this, first clean up the APIs of coerce_to_domain() and
some internal functions in parse_coerce.c so that we consistently pass
around a CoercionContext along with CoercionForm. Previously, we sometimes
passed an "isExplicit" boolean flag instead, which is strictly less
information; and coerce_to_domain() didn't even get that, but instead had
to reverse-engineer isExplicit from CoercionForm. That's contrary to the
documentation in primnodes.h that says that CoercionForm only affects
display and not semantics. I don't think this change fixes any live bugs,
but it makes things more consistent. The main reason for doing it though
is that now build_coercion_expression() receives ccontext, which it needs
in order to be able to recursively invoke coerce_to_target_type().
Next, reimplement ArrayCoerceExpr so that the node does not directly know
any details of what has to be done to the individual array elements while
performing the array coercion. Instead, the per-element processing is
represented by a sub-expression whose input is a source array element and
whose output is a target array element. This simplifies life in
parse_coerce.c, because it can build that sub-expression by a recursive
invocation of coerce_to_target_type(). The executor now handles the
per-element processing as a compiled expression instead of hard-wired code.
The main advantage of this is that we can use a single ArrayCoerceExpr to
handle as many as three successive steps per element: base type conversion,
typmod coercion, and domain constraint checking. The old code used two
stacked ArrayCoerceExprs to handle type + typmod coercion, which was pretty
inefficient, and adding yet another array deconstruction to do domain
constraint checking seemed very unappetizing.
In the case where we just need a single, very simple coercion function,
doing this straightforwardly leads to a noticeable increase in the
per-array-element runtime cost. Hence, add an additional shortcut evalfunc
in execExprInterp.c that skips unnecessary overhead for that specific form
of expression. The runtime speed of simple cases is within 1% or so of
where it was before, while cases that previously required two levels of
array processing are significantly faster.
Finally, create an implicit array type for every domain type, as we do for
base types, enums, etc. Everything except the array-coercion case seems
to just work without further effort.
Tom Lane, reviewed by Andrew Dunstan
Discussion: https://postgr.es/m/9852.1499791473@sss.pgh.pa.us
2017-09-30 19:40:56 +02:00
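The single-pass per-element processing this commit message describes (one array deconstruction covering base conversion, typmod coercion, and domain checking) can be sketched outside the executor. This is a hypothetical Python model, not the compiled-expression machinery; the function and parameter names are invented for illustration:

```python
def coerce_array_elements(elems, base_conv, typmod_coerce, domain_check):
    """Apply up to three steps per element in a single pass over the array:
    base type conversion, typmod coercion, and domain constraint checking.
    The old approach needed one full deconstruction per step."""
    out = []
    for e in elems:
        v = base_conv(e)       # base type conversion
        v = typmod_coerce(v)   # typmod coercion (e.g. length enforcement)
        domain_check(v)        # domain constraint check; raises on violation
        out.append(v)
    return out
```

In the real executor the three steps are one compiled sub-expression whose input is a source element and whose output is a target element, which is what lets a single ArrayCoerceExpr replace two stacked ones.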
|
|
|
address.objectId, /* element type ID */
|
|
|
|
true, /* yes this is an array type */
|
|
|
|
InvalidOid, /* no further array type */
|
|
|
|
InvalidOid, /* base type ID */
|
|
|
|
NULL, /* never a default type value */
|
|
|
|
NULL, /* binary default isn't sent either */
|
|
|
|
false, /* never passed by value */
|
|
|
|
alignment, /* see above */
|
2020-03-04 16:34:25 +01:00
|
|
|
TYPSTORAGE_EXTENDED, /* ARRAY is always toastable */
|
|
|
|
-1, /* typMod (Domains only) */
|
|
|
|
0, /* Array dimensions of typbasetype */
|
|
|
|
false, /* Type NOT NULL */
|
|
|
|
domaincoll); /* type's collation */
|
|
|
|
|
|
|
|
pfree(domainArrayName);
|
|
|
|
|
2002-11-15 03:50:21 +01:00
|
|
|
/*
|
2005-10-15 04:49:52 +02:00
|
|
|
* Process constraints which refer to the domain ID returned by TypeCreate
|
2002-11-15 03:50:21 +01:00
|
|
|
*/
|
|
|
|
foreach(listptr, schema)
|
|
|
|
{
|
|
|
|
Constraint *constr = lfirst(listptr);
|
|
|
|
|
2002-12-09 21:31:05 +01:00
|
|
|
/* it must be a Constraint, per check above */
|
|
|
|
|
2002-11-15 03:50:21 +01:00
|
|
|
switch (constr->contype)
|
|
|
|
{
|
2003-08-04 02:43:34 +02:00
|
|
|
case CONSTR_CHECK:
|
Change many routines to return ObjectAddress rather than OID
The changed routines are mostly those that can be directly called by
ProcessUtilitySlow; the intention is to make the affected object
information more precise, in support of future event trigger changes.
Originally it was envisioned that the OID of the affected object would
be enough, and in most cases that is correct, but upon actually
implementing the event trigger changes it turned out that ObjectAddress
is more widely useful.
Additionally, some command execution routines grew an output argument
that's an object address which provides further info about the executed
command. To wit:
* for ALTER DOMAIN / ADD CONSTRAINT, it corresponds to the address of
the new constraint
* for ALTER OBJECT / SET SCHEMA, it corresponds to the address of the
schema that originally contained the object.
* for ALTER EXTENSION {ADD, DROP} OBJECT, it corresponds to the address
of the object added to or dropped from the extension.
There's no user-visible change in this commit, and no functional change
either.
Discussion: 20150218213255.GC6717@tamriel.snowman.net
Reviewed-By: Stephen Frost, Andres Freund
2015-03-03 18:10:50 +01:00
|
|
|
domainAddConstraint(address.objectId, domainNamespace,
|
2006-12-30 22:21:56 +01:00
|
|
|
basetypeoid, basetypeMod,
|
|
|
|
constr, domainName, NULL);
|
2003-08-04 02:43:34 +02:00
|
|
|
break;
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2003-08-04 02:43:34 +02:00
|
|
|
/* Other constraint types were fully processed above */
|
2002-12-09 21:31:05 +01:00
|
|
|
|
2002-11-15 03:50:21 +01:00
|
|
|
default:
|
2003-08-04 02:43:34 +02:00
|
|
|
break;
|
2002-11-15 03:50:21 +01:00
|
|
|
}
|
2004-06-10 19:56:03 +02:00
|
|
|
|
|
|
|
/* CCI so we can detect duplicate constraint names */
|
|
|
|
CommandCounterIncrement();
|
2002-11-15 03:50:21 +01:00
|
|
|
}
|
2002-07-17 00:12:20 +02:00
|
|
|
|
2002-04-15 07:22:04 +02:00
|
|
|
/*
|
|
|
|
* Now we can clean up.
|
|
|
|
*/
|
|
|
|
ReleaseSysCache(typeTup);
|
2012-12-24 00:25:03 +01:00
|
|
|
|
|
|
|
return address;
|
2002-04-15 07:22:04 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
|
2007-04-02 05:49:42 +02:00
|
|
|
/*
|
|
|
|
* DefineEnum
|
|
|
|
* Registers a new enum.
|
|
|
|
*/
|
|
|
|
ObjectAddress
|
2007-11-15 23:25:18 +01:00
|
|
|
DefineEnum(CreateEnumStmt *stmt)
|
2007-04-02 05:49:42 +02:00
|
|
|
{
|
2007-11-15 22:14:46 +01:00
|
|
|
char *enumName;
|
|
|
|
char *enumArrayName;
|
|
|
|
Oid enumNamespace;
|
2007-04-02 05:49:42 +02:00
|
|
|
AclResult aclresult;
|
2007-11-15 22:14:46 +01:00
|
|
|
Oid old_type_oid;
|
|
|
|
Oid enumArrayOid;
|
|
|
|
ObjectAddress enumTypeAddr;
|
2007-04-02 05:49:42 +02:00
|
|
|
|
|
|
|
/* Convert list of names to a name and namespace */
|
2009-07-16 08:33:46 +02:00
|
|
|
enumNamespace = QualifiedNameGetCreationNamespace(stmt->typeName,
|
2007-04-02 05:49:42 +02:00
|
|
|
&enumName);
|
|
|
|
|
|
|
|
/* Check we have creation rights in target namespace */
|
|
|
|
aclresult = pg_namespace_aclcheck(enumNamespace, GetUserId(), ACL_CREATE);
|
|
|
|
if (aclresult != ACLCHECK_OK)
|
2017-12-02 15:26:34 +01:00
|
|
|
aclcheck_error(aclresult, OBJECT_SCHEMA,
|
2007-04-02 05:49:42 +02:00
|
|
|
get_namespace_name(enumNamespace));
|
|
|
|
|
2007-05-12 02:55:00 +02:00
|
|
|
/*
|
2014-05-06 18:12:18 +02:00
|
|
|
* Check for collision with an existing type name. If there is one and
|
2007-05-12 02:55:00 +02:00
|
|
|
* it's an autogenerated array, we can rename it out of the way.
|
|
|
|
*/
|
Remove WITH OIDS support, change oid catalog column visibility.
Previously tables declared WITH OIDS, including a significant fraction
of the catalog tables, stored the oid column not as a normal column,
but as part of the tuple header.
This special column was not shown by default, which was somewhat odd,
as it's often (consider e.g. pg_class.oid) one of the more important
parts of a row. Neither pg_dump nor COPY included the contents of the
oid column by default.
The fact that the oid column was not an ordinary column necessitated a
significant amount of special case code to support oid columns. That
already was painful for the existing, but upcoming work aiming to make
table storage pluggable, would have required expanding and duplicating
that "specialness" significantly.
WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0).
Remove it.
Removing includes:
- CREATE TABLE and ALTER TABLE syntax for declaring the table to be
WITH OIDS has been removed (WITH (oids[ = true]) will error out)
- pg_dump does not support dumping tables declared WITH OIDS and will
issue a warning when dumping one (and ignore the oid column).
- restoring a pg_dump archive with pg_restore will warn when
restoring a table with oid contents (and ignore the oid column)
- COPY will refuse to load binary dump that includes oids.
- pg_upgrade will error out when encountering tables declared WITH
OIDS, they have to be altered to remove the oid column first.
- Functionality to access the oid of the last inserted row (like
plpgsql's RESULT_OID, spi's SPI_lastoid, ...) has been removed.
The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false)
for CREATE TABLE) is still supported. While that requires a bit of
support code, it seems unnecessary to break applications / dumps that
do not use oids, and are explicit about not using them.
The biggest user of WITH OID columns was postgres' catalog. This
commit changes all 'magic' oid columns to be columns that are normally
declared and stored. To reduce unnecessary query breakage all the
newly added columns are still named 'oid', even if a table's column
naming scheme would indicate 'reloid' or such. This obviously
requires adapting a lot code, mostly replacing oid access via
HeapTupleGetOid() with access to the underlying Form_pg_*->oid column.
The bootstrap process now assigns oids for all oid columns in
genbki.pl that do not have an explicit value (starting at the largest
oid previously used), only oids assigned later will be above
FirstBootstrapObjectId. As the oid column now is a normal column the
special bootstrap syntax for oids has been removed.
Oids are not automatically assigned during insertion anymore, all
backend code explicitly assigns oids with GetNewOidWithIndex(). For
the rare case that insertions into the catalog via SQL are called for
the new pg_nextoid() function can be used (which only works on catalog
tables).
The fact that oid columns on system tables are now normal columns
means that they will be included in the set of columns expanded
by * (i.e. SELECT * FROM pg_class will now include the table's oid,
previously it did not). It'd not technically be hard to hide oid
column by default, but that'd mean confusing behavior would either
have to be carried forward forever, or it'd cause breakage down the
line.
While it's not unlikely that further adjustments are needed, the
scope/invasiveness of the patch makes it worthwhile to merge this
now. It's painful to maintain externally, too complicated to commit
after the code freeze, and a dependency of a number of other
patches.
Catversion bump, for obvious reasons.
Author: Andres Freund, with contributions by John Naylor
Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de
2018-11-21 00:36:57 +01:00
|
|
|
old_type_oid = GetSysCacheOid2(TYPENAMENSP, Anum_pg_type_oid,
|
2010-02-14 19:42:19 +01:00
|
|
|
CStringGetDatum(enumName),
|
|
|
|
ObjectIdGetDatum(enumNamespace));
|
2007-05-12 02:55:00 +02:00
|
|
|
if (OidIsValid(old_type_oid))
|
|
|
|
{
|
|
|
|
if (!moveArrayTypeName(old_type_oid, enumName, enumNamespace))
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_DUPLICATE_OBJECT),
|
|
|
|
errmsg("type \"%s\" already exists", enumName)));
|
|
|
|
}
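The collision handling above (rename an autogenerated array type out of the way, otherwise fail with a duplicate-object error) can be sketched as a small Python model. The catalog representation and `ensure_type_name_available` are invented for illustration; the renaming loosely mirrors what moveArrayTypeName() does:

```python
def ensure_type_name_available(catalog, name, namespace):
    """If a type of this name exists and is an autogenerated array type,
    rename it out of the way; otherwise report a duplicate object."""
    old = catalog.get((namespace, name))
    if old is None:
        return
    if old.get("autogenerated_array"):
        # Pick a non-colliding name for the displaced array type.
        new_name = "_" + name
        while (namespace, new_name) in catalog:
            new_name = "_" + new_name
        catalog[(namespace, new_name)] = catalog.pop((namespace, name))
    else:
        raise ValueError('type "%s" already exists' % name)
```

This is why `CREATE TYPE myenum AS ENUM (...)` succeeds even when a previous `CREATE TABLE` already consumed the name for its autogenerated array type, but fails cleanly for a genuine user-defined collision.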
|
|
|
|
|
|
|
|
/* Allocate OID for array type */
|
2009-12-24 23:09:24 +01:00
|
|
|
enumArrayOid = AssignTypeArrayOid();
|
2007-04-02 05:49:42 +02:00
|
|
|
|
|
|
|
/* Create the pg_type entry */
|
|
|
|
enumTypeAddr =
|
2007-11-15 22:14:46 +01:00
|
|
|
TypeCreate(InvalidOid, /* no predetermined type OID */
|
|
|
|
enumName, /* type name */
|
Phase 2 of pgindent updates.
Change pg_bsd_indent to follow upstream rules for placement of comments
to the right of code, and remove pgindent hack that caused comments
following #endif to not obey the general rule.
Commit e3860ffa4dd0dad0dd9eea4be9cc1412373a8c89 wasn't actually using
the published version of pg_bsd_indent, but a hacked-up version that
tried to minimize the amount of movement of comments to the right of
code. The situation of interest is where such a comment has to be
moved to the right of its default placement at column 33 because there's
code there. BSD indent has always moved right in units of tab stops
in such cases --- but in the previous incarnation, indent was working
in 8-space tab stops, while now it knows we use 4-space tabs. So the
net result is that in about half the cases, such comments are placed
one tab stop left of before. This is better all around: it leaves
more room on the line for comment text, and it means that in such
cases the comment uniformly starts at the next 4-space tab stop after
the code, rather than sometimes one and sometimes two tabs after.
Also, ensure that comments following #endif are indented the same
as comments following other preprocessor commands such as #else.
That inconsistency turns out to have been self-inflicted damage
from a poorly-thought-through post-indent "fixup" in pgindent.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 21:18:54 +02:00
|
|
|
enumNamespace, /* namespace */
|
2007-11-15 22:14:46 +01:00
|
|
|
InvalidOid, /* relation oid (n/a here) */
|
|
|
|
0, /* relation kind (ditto) */
|
2009-06-11 16:49:15 +02:00
|
|
|
GetUserId(), /* owner's ID */
|
2007-11-15 22:14:46 +01:00
|
|
|
sizeof(Oid), /* internal size */
|
2007-04-02 05:49:42 +02:00
|
|
|
TYPTYPE_ENUM, /* type-type (enum type) */
|
Replace the hard-wired type knowledge in TypeCategory() and IsPreferredType()
with system catalog lookups, as was foreseen to be necessary almost since
their creation. Instead put the information into two new pg_type columns,
typcategory and typispreferred. Add support for setting these when
creating a user-defined base type.
The category column is just a "char" (i.e. a poor man's enum), allowing
a crude form of user extensibility of the category list: just use an
otherwise-unused character. This seems sufficient for foreseen uses,
but we could upgrade to having an actual category catalog someday, if
there proves to be a huge demand for custom type categories.
In this patch I have attempted to hew exactly to the behavior of the
previous hardwired logic, except for introducing new type categories for
arrays, composites, and enums. In particular the default preferred state
for user-defined types remains TRUE. That seems worth revisiting, but it
should be done as a separate patch from introducing the infrastructure.
Likewise, any adjustment of the standard set of categories should be done
separately.
2008-07-30 19:05:05 +02:00
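The catalog-driven category lookup this commit message describes, with `typcategory` as a single "char" (a poor man's enum) plus a `typispreferred` flag, can be sketched like so. The table contents here are hypothetical sample data, not the actual pg_type rows:

```python
# Hypothetical per-type entries: (typcategory char, typispreferred flag),
# replacing the hard-wired switch statements in TypeCategory() and
# IsPreferredType().
TYPE_CATALOG = {
    "int4":   ("N", False),   # numeric category
    "float8": ("N", True),    # preferred within the numeric category
    "text":   ("S", True),    # string category, preferred
    "myenum": ("E", False),   # enum category
}

def type_category(typename):
    return TYPE_CATALOG[typename][0]

def is_preferred_type(typename):
    return TYPE_CATALOG[typename][1]
```

Because the category is just a character, a user-defined base type can claim an otherwise-unused letter as a crude custom category, exactly the extensibility point the commit message notes.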
|
|
|
TYPCATEGORY_ENUM, /* type-category (enum type) */
|
2008-07-30 21:35:13 +02:00
|
|
|
false, /* enum types are never preferred */
|
2007-04-02 05:49:42 +02:00
|
|
|
DEFAULT_TYPDELIM, /* array element delimiter */
|
2007-11-15 22:14:46 +01:00
|
|
|
F_ENUM_IN, /* input procedure */
|
|
|
|
F_ENUM_OUT, /* output procedure */
|
|
|
|
F_ENUM_RECV, /* receive procedure */
|
|
|
|
F_ENUM_SEND, /* send procedure */
|
|
|
|
InvalidOid, /* typmodin procedure - none */
|
|
|
|
InvalidOid, /* typmodout procedure - none */
|
|
|
|
InvalidOid, /* analyze procedure - default */
|
Support subscripting of arbitrary types, not only arrays.
This patch generalizes the subscripting infrastructure so that any
data type can be subscripted, if it provides a handler function to
define what that means. Traditional variable-length (varlena) arrays
all use array_subscript_handler(), while the existing fixed-length
types that support subscripting use raw_array_subscript_handler().
It's expected that other types that want to use subscripting notation
will define their own handlers. (This patch provides no such new
features, though; it only lays the foundation for them.)
To do this, move the parser's semantic processing of subscripts
(including coercion to whatever data type is required) into a
method callback supplied by the handler. On the execution side,
replace the ExecEvalSubscriptingRef* layer of functions with direct
calls to callback-supplied execution routines. (Thus, essentially
no new run-time overhead should be caused by this patch. Indeed,
there is room to remove some overhead by supplying specialized
execution routines. This patch does a little bit in that line,
but more could be done.)
Additional work is required here and there to remove formerly
hard-wired assumptions about the result type, collation, etc
of a SubscriptingRef expression node; and to remove assumptions
that the subscript values must be integers.
One useful side-effect of this is that we now have a less squishy
mechanism for identifying whether a data type is a "true" array:
instead of wiring in weird rules about typlen, we can look to see
if pg_type.typsubscript == F_ARRAY_SUBSCRIPT_HANDLER. For this
to be bulletproof, we have to forbid user-defined types from using
that handler directly; but there seems no good reason for them to
do so.
This patch also removes assumptions that the number of subscripts
is limited to MAXDIM (6), or indeed has any hard-wired limit.
That limit still applies to types handled by array_subscript_handler
or raw_array_subscript_handler, but to discourage other dependencies
on this constant, I've moved it from c.h to utils/array.h.
Dmitry Dolgov, reviewed at various times by Tom Lane, Arthur Zakirov,
Peter Eisentraut, Pavel Stehule
Discussion: https://postgr.es/m/CA+q6zcVDuGBv=M0FqBYX8DPebS3F_0KQ6OVFobGJPM507_SZ_w@mail.gmail.com
Discussion: https://postgr.es/m/CA+q6zcVovR+XY4mfk-7oNk-rF91gH0PebnNfuUjuuDsyHjOcVA@mail.gmail.com
2020-12-09 18:40:37 +01:00
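The handler-based dispatch this commit message describes — subscripting semantics supplied per type via a callback rather than hard-wired array logic — can be sketched as follows. The class and function names are invented for illustration and only echo the roles of array_subscript_handler() and its peers:

```python
class ListSubscriptHandler:
    """Hypothetical handler for a 'true array' container type."""
    def fetch(self, container, index):
        return container[index]

class DictSubscriptHandler:
    """A non-array container supplying its own subscripting semantics,
    including non-integer subscripts."""
    def fetch(self, container, index):
        return container[index]

def eval_subscript(value, index, handlers):
    # Dispatch through the handler registered for the container's type,
    # instead of assuming every subscriptable value is an array.
    return handlers[type(value)].fetch(value, index)
```

Note how the dict handler accepts a string subscript: lifting the assumption that subscripts must be integers is one of the points the commit message calls out.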
|
|
|
InvalidOid, /* subscript procedure - none */
|
2007-11-15 22:14:46 +01:00
|
|
|
InvalidOid, /* element type ID */
|
|
|
|
false, /* this is not an array type */
|
Support arrays of composite types, including the rowtypes of regular tables
and views (but not system catalogs, nor sequences or toast tables). Get rid
of the hardwired convention that a type's array type is named exactly "_type",
instead using a new column pg_type.typarray to provide the linkage. (It still
will be named "_type", though, except in odd corner cases such as
maximum-length type names.)
Along the way, make tracking of owner and schema dependencies for types more
uniform: a type directly created by the user has these dependencies, while a
table rowtype or auto-generated array type does not have them, but depends on
its parent object instead.
David Fetter, Andrew Dunstan, Tom Lane
2007-05-11 19:57:14 +02:00
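The "_type" naming convention for autogenerated array types mentioned above can be sketched in a few lines. This is a simplified, hypothetical model of makeArrayTypeName(); the real function also probes the catalog to dodge collisions:

```python
NAMEDATALEN = 64  # PostgreSQL's identifier length limit (incl. terminator)

def make_array_type_name(type_name):
    """Prepend an underscore, truncating so the result fits in
    NAMEDATALEN - 1 bytes -- the 'odd corner case' for
    maximum-length type names the commit message mentions."""
    return ("_" + type_name)[:NAMEDATALEN - 1]
```

Since pg_type.typarray now records the linkage explicitly, the "_type" name is only a convention, not something callers may rely on.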
|
|
|
enumArrayOid, /* array type we are about to create */
|
2007-11-15 22:14:46 +01:00
|
|
|
InvalidOid, /* base type ID (only for domains) */
|
|
|
|
NULL, /* never a default type value */
|
|
|
|
NULL, /* binary default isn't sent either */
|
|
|
|
true, /* always passed by value */
|
2020-03-04 16:34:25 +01:00
|
|
|
TYPALIGN_INT, /* int alignment */
|
|
|
|
TYPSTORAGE_PLAIN, /* TOAST strategy always plain */
|
2007-11-15 22:14:46 +01:00
|
|
|
-1, /* typMod (Domains only) */
|
|
|
|
0, /* Array dimensions of typbasetype */
|
2011-02-08 22:04:18 +01:00
|
|
|
false, /* Type NOT NULL */
|
2011-04-22 23:43:18 +02:00
|
|
|
InvalidOid); /* type's collation */
|
2007-04-02 05:49:42 +02:00
|
|
|
|
|
|
|
/* Enter the enum's values into pg_enum */
|
|
|
|
EnumValuesCreate(enumTypeAddr.objectId, stmt->vals);
|
2007-04-02 05:49:42 +02:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Create the array type that goes with it.
|
|
|
|
*/
|
|
|
|
enumArrayName = makeArrayTypeName(enumName, enumNamespace);
|
2007-04-02 05:49:42 +02:00
|
|
|
|
|
|
|
	TypeCreate(enumArrayOid,	/* force assignment of this type OID */
			   enumArrayName,	/* type name */
			   enumNamespace,	/* namespace */
			   InvalidOid,		/* relation oid (n/a here) */
			   0,				/* relation kind (ditto) */
Repair a longstanding bug in CLUSTER and the rewriting variants of ALTER
TABLE: if the command is executed by someone other than the table owner (eg,
a superuser) and the table has a toast table, the toast table's pg_type row
ends up with the wrong typowner, ie, the command issuer not the table owner.
This is quite harmless for most purposes, since no interesting permissions
checks consult the pg_type row. However, it could lead to unexpected failures
if one later tries to drop the role that issued the command (in 8.1 or 8.2),
or strange warnings from pg_dump afterwards (in 8.3 and up, which will allow
the DROP ROLE because we don't create a "redundant" owner dependency for table
rowtypes). Problem identified by Cott Lang.
Back-patch to 8.1. The problem is actually far older --- the CLUSTER variant
can be demonstrated in 7.0 --- but it's mostly cosmetic before 8.1 because we
didn't track ownership dependencies before 8.1. Also, fixing it before 8.1
would require changing the call signature of heap_create_with_catalog(), which
seems to carry a nontrivial risk of breaking add-on modules.
2009-02-24 02:38:10 +01:00
			   GetUserId(),		/* owner's ID */
			   -1,				/* internal size (always varlena) */
			   TYPTYPE_BASE,	/* type-type (base type) */
Phase 2 of pgindent updates.
Change pg_bsd_indent to follow upstream rules for placement of comments
to the right of code, and remove pgindent hack that caused comments
following #endif to not obey the general rule.
Commit e3860ffa4dd0dad0dd9eea4be9cc1412373a8c89 wasn't actually using
the published version of pg_bsd_indent, but a hacked-up version that
tried to minimize the amount of movement of comments to the right of
code. The situation of interest is where such a comment has to be
moved to the right of its default placement at column 33 because there's
code there. BSD indent has always moved right in units of tab stops
in such cases --- but in the previous incarnation, indent was working
in 8-space tab stops, while now it knows we use 4-space tabs. So the
net result is that in about half the cases, such comments are placed
one tab stop left of before. This is better all around: it leaves
more room on the line for comment text, and it means that in such
cases the comment uniformly starts at the next 4-space tab stop after
the code, rather than sometimes one and sometimes two tabs after.
Also, ensure that comments following #endif are indented the same
as comments following other preprocessor commands such as #else.
That inconsistency turns out to have been self-inflicted damage
from a poorly-thought-through post-indent "fixup" in pgindent.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 21:18:54 +02:00
			   TYPCATEGORY_ARRAY,	/* type-category (array) */
			   false,			/* array types are never preferred */
			   DEFAULT_TYPDELIM,	/* array element delimiter */
			   F_ARRAY_IN,		/* input procedure */
			   F_ARRAY_OUT,		/* output procedure */
			   F_ARRAY_RECV,	/* receive procedure */
			   F_ARRAY_SEND,	/* send procedure */
			   InvalidOid,		/* typmodin procedure - none */
			   InvalidOid,		/* typmodout procedure - none */
			   F_ARRAY_TYPANALYZE,	/* analyze procedure */
Support subscripting of arbitrary types, not only arrays.
This patch generalizes the subscripting infrastructure so that any
data type can be subscripted, if it provides a handler function to
define what that means. Traditional variable-length (varlena) arrays
all use array_subscript_handler(), while the existing fixed-length
types that support subscripting use raw_array_subscript_handler().
It's expected that other types that want to use subscripting notation
will define their own handlers. (This patch provides no such new
features, though; it only lays the foundation for them.)
To do this, move the parser's semantic processing of subscripts
(including coercion to whatever data type is required) into a
method callback supplied by the handler. On the execution side,
replace the ExecEvalSubscriptingRef* layer of functions with direct
calls to callback-supplied execution routines. (Thus, essentially
no new run-time overhead should be caused by this patch. Indeed,
there is room to remove some overhead by supplying specialized
execution routines. This patch does a little bit in that line,
but more could be done.)
Additional work is required here and there to remove formerly
hard-wired assumptions about the result type, collation, etc
of a SubscriptingRef expression node; and to remove assumptions
that the subscript values must be integers.
One useful side-effect of this is that we now have a less squishy
mechanism for identifying whether a data type is a "true" array:
instead of wiring in weird rules about typlen, we can look to see
if pg_type.typsubscript == F_ARRAY_SUBSCRIPT_HANDLER. For this
to be bulletproof, we have to forbid user-defined types from using
that handler directly; but there seems no good reason for them to
do so.
This patch also removes assumptions that the number of subscripts
is limited to MAXDIM (6), or indeed has any hard-wired limit.
That limit still applies to types handled by array_subscript_handler
or raw_array_subscript_handler, but to discourage other dependencies
on this constant, I've moved it from c.h to utils/array.h.
Dmitry Dolgov, reviewed at various times by Tom Lane, Arthur Zakirov,
Peter Eisentraut, Pavel Stehule
Discussion: https://postgr.es/m/CA+q6zcVDuGBv=M0FqBYX8DPebS3F_0KQ6OVFobGJPM507_SZ_w@mail.gmail.com
Discussion: https://postgr.es/m/CA+q6zcVovR+XY4mfk-7oNk-rF91gH0PebnNfuUjuuDsyHjOcVA@mail.gmail.com
2020-12-09 18:40:37 +01:00
			   F_ARRAY_SUBSCRIPT_HANDLER,	/* array subscript procedure */
			   enumTypeAddr.objectId,	/* element type ID */
			   true,			/* yes this is an array type */
			   InvalidOid,		/* no further array type */
			   InvalidOid,		/* base type ID */
			   NULL,			/* never a default type value */
			   NULL,			/* binary default isn't sent either */
			   false,			/* never passed by value */
			   TYPALIGN_INT,	/* enums have int align, so do their arrays */
			   TYPSTORAGE_EXTENDED, /* ARRAY is always toastable */
			   -1,				/* typMod (Domains only) */
			   0,				/* Array dimensions of typbasetype */
			   false,			/* Type NOT NULL */
			   InvalidOid);		/* type's collation */

	pfree(enumArrayName);

	return enumTypeAddr;
}

/*
 * AlterEnum
 *		Adds a new label to an existing enum, or renames an existing label.
 */
ObjectAddress
AlterEnum(AlterEnumStmt *stmt)
{
	Oid			enum_type_oid;
	TypeName   *typename;
	HeapTuple	tup;
	ObjectAddress address;

	/* Make a TypeName so we can use standard type lookup machinery */
	typename = makeTypeNameFromNameList(stmt->typeName);
	enum_type_oid = typenameTypeId(NULL, typename);

	tup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(enum_type_oid));
	if (!HeapTupleIsValid(tup))
		elog(ERROR, "cache lookup failed for type %u", enum_type_oid);

	/* Check it's an enum and check user has permission to ALTER the enum */
	checkEnumOwner(tup);

	ReleaseSysCache(tup);

	if (stmt->oldVal)
	{
		/* Rename an existing label */
		RenameEnumLabel(enum_type_oid, stmt->oldVal, stmt->newVal);
	}
	else
	{
		/* Add a new label */
		AddEnumLabel(enum_type_oid, stmt->newVal,
					 stmt->newValNeighbor, stmt->newValIsAfter,
					 stmt->skipIfNewValExists);
	}

	InvokeObjectPostAlterHook(TypeRelationId, enum_type_oid, 0);

	ObjectAddressSet(address, TypeRelationId, enum_type_oid);

	return address;
}

/*
 * checkEnumOwner
 *
 * Check that the type is actually an enum and that the current user
 * has permission to do ALTER TYPE on it.  Throw an error if not.
 */
static void
checkEnumOwner(HeapTuple tup)
{
	Form_pg_type typTup = (Form_pg_type) GETSTRUCT(tup);

	/* Check that this is actually an enum */
	if (typTup->typtype != TYPTYPE_ENUM)
		ereport(ERROR,
				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
				 errmsg("%s is not an enum",
Remove WITH OIDS support, change oid catalog column visibility.
Previously tables declared WITH OIDS, including a significant fraction
of the catalog tables, stored the oid column not as a normal column,
but as part of the tuple header.
This special column was not shown by default, which was somewhat odd,
as it's often (consider e.g. pg_class.oid) one of the more important
parts of a row. Neither pg_dump nor COPY included the contents of the
oid column by default.
The fact that the oid column was not an ordinary column necessitated a
significant amount of special case code to support oid columns. That
was already painful for the existing code, but the upcoming work aiming
to make table storage pluggable would have required expanding and duplicating
that "specialness" significantly.
WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0).
Remove it.
Removing includes:
- CREATE TABLE and ALTER TABLE syntax for declaring the table to be
WITH OIDS has been removed (WITH (oids[ = true]) will error out)
- pg_dump does not support dumping tables declared WITH OIDS and will
issue a warning when dumping one (and ignore the oid column).
- restoring a pg_dump archive with pg_restore will warn when
restoring a table with oid contents (and ignore the oid column)
- COPY will refuse to load binary dump that includes oids.
- pg_upgrade will error out when encountering tables declared WITH
OIDS, they have to be altered to remove the oid column first.
- Functionality to access the oid of the last inserted row (like
plpgsql's RESULT_OID, spi's SPI_lastoid, ...) has been removed.
The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false)
for CREATE TABLE) is still supported. While that requires a bit of
support code, it seems unnecessary to break applications / dumps that
do not use oids, and are explicit about not using them.
The biggest user of WITH OID columns was postgres' catalog. This
commit changes all 'magic' oid columns to be columns that are normally
declared and stored. To reduce unnecessary query breakage all the
newly added columns are still named 'oid', even if a table's column
naming scheme would indicate 'reloid' or such. This obviously
requires adapting a lot of code, mostly replacing oid access via
HeapTupleGetOid() with access to the underlying Form_pg_*->oid column.
The bootstrap process now assigns oids for all oid columns in
genbki.pl that do not have an explicit value (starting at the largest
oid previously used), only oids assigned later by oids will be above
FirstBootstrapObjectId. As the oid column now is a normal column the
special bootstrap syntax for oids has been removed.
Oids are not automatically assigned during insertion anymore, all
backend code explicitly assigns oids with GetNewOidWithIndex(). For
the rare case that insertions into the catalog via SQL are called for
the new pg_nextoid() function can be used (which only works on catalog
tables).
The fact that oid columns on system tables are now normal columns
means that they will be included in the set of columns expanded
by * (i.e. SELECT * FROM pg_class will now include the table's oid,
previously it did not). It'd not technically be hard to hide oid
column by default, but that'd mean confusing behavior would either
have to be carried forward forever, or it'd cause breakage down the
line.
While it's not unlikely that further adjustments are needed, the
scope/invasiveness of the patch makes it worthwhile to merge this
now. It's painful to maintain externally, too complicated to commit
after the code freeze, and a dependency of a number of other
patches.
Catversion bump, for obvious reasons.
Author: Andres Freund, with contributions by John Naylor
Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de
2018-11-21 00:36:57 +01:00
						format_type_be(typTup->oid))));

	/* Permission check: must own type */
	if (!pg_type_ownercheck(typTup->oid, GetUserId()))
		aclcheck_error_type(ACLCHECK_NOT_OWNER, typTup->oid);
}

/*
 * DefineRange
 *		Registers a new range type.
Multirange datatypes
Multiranges are basically sorted arrays of non-overlapping ranges with
set-theoretic operations defined over them.
Since v14, each range type automatically gets a corresponding multirange
datatype. There are both manual and automatic mechanisms for naming multirange
types. One can specify a multirange type name using the multirange_type_name
attribute in CREATE TYPE. Otherwise, a multirange type name is generated
automatically. If the range type name contains "range" then we change that to
"multirange". Otherwise, we add "_multirange" to the end.
Implementation of multiranges comes with a space-efficient internal
representation format, which evades extra paddings and duplicated storage of
oids. Altogether this format allows fetching a particular range by its index
in O(n).
Statistic gathering and selectivity estimation are implemented for multiranges.
For this purpose, stored multirange is approximated as union range without gaps.
This field will likely need improvements in the future.
Catversion is bumped.
Discussion: https://postgr.es/m/CALNJ-vSUpQ_Y%3DjXvTxt1VYFztaBSsWVXeF1y6gTYQ4bOiWDLgQ%40mail.gmail.com
Discussion: https://postgr.es/m/a0b8026459d1e6167933be2104a6174e7d40d0ab.camel%40j-davis.com#fe7218c83b08068bfffb0c5293eceda0
Author: Paul Jungwirth, revised by me
Reviewed-by: David Fetter, Corey Huinker, Jeff Davis, Pavel Stehule
Reviewed-by: Alvaro Herrera, Tom Lane, Isaac Morland, David G. Johnston
Reviewed-by: Zhihong Yu, Alexander Korotkov
2020-12-20 05:20:33 +01:00
 *
 * Perhaps it might be worthwhile to set pg_type.typelem to the base type,
 * and likewise on multiranges to set it to the range type. But having a
 * non-zero typelem is treated elsewhere as a synonym for being an array,
 * and users might have queries with that same assumption.
 */
ObjectAddress
DefineRange(CreateRangeStmt *stmt)
{
	char	   *typeName;
	Oid			typeNamespace;
	Oid			typoid;
	char	   *rangeArrayName;
	char	   *multirangeTypeName = NULL;
	char	   *multirangeArrayName;
	Oid			multirangeNamespace = InvalidOid;
	Oid			rangeArrayOid;
Multirange datatypes
Multiranges are basically sorted arrays of non-overlapping ranges with
set-theoretic operations defined over them.
Since v14, each range type automatically gets a corresponding multirange
datatype. There are both manual and automatic mechanisms for naming multirange
types. One can specify the multirange type name using the multirange_type_name
attribute in CREATE TYPE. Otherwise, a multirange type name is generated
automatically. If the range type name contains "range" then we change that to
"multirange". Otherwise, we add "_multirange" to the end.
The implementation of multiranges comes with a space-efficient internal
representation format, which avoids extra padding and duplicated storage of
oids. Altogether this format allows fetching a particular range by its index
in O(n).
Statistics gathering and selectivity estimation are implemented for multiranges.
For this purpose, a stored multirange is approximated as a union range without
gaps. This area will likely need improvements in the future.
Catversion is bumped.
Discussion: https://postgr.es/m/CALNJ-vSUpQ_Y%3DjXvTxt1VYFztaBSsWVXeF1y6gTYQ4bOiWDLgQ%40mail.gmail.com
Discussion: https://postgr.es/m/a0b8026459d1e6167933be2104a6174e7d40d0ab.camel%40j-davis.com#fe7218c83b08068bfffb0c5293eceda0
Author: Paul Jungwirth, revised by me
Reviewed-by: David Fetter, Corey Huinker, Jeff Davis, Pavel Stehule
Reviewed-by: Alvaro Herrera, Tom Lane, Isaac Morland, David G. Johnston
Reviewed-by: Zhihong Yu, Alexander Korotkov
2020-12-20 05:20:33 +01:00
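The naming rule described in the commit message above ("range" becomes "multirange", otherwise append "_multirange") can be sketched as a tiny standalone C function. This is a simplified stand-in for the backend's actual helper (which also handles NAMEDATALEN truncation and name collisions), not the real implementation.

```c
#include <stdlib.h>
#include <string.h>

/*
 * Sketch of the automatic multirange naming rule: replace the first
 * occurrence of "range" in the range type's name with "multirange",
 * or append "_multirange" if "range" does not occur.  Simplified:
 * the real backend code also truncates to NAMEDATALEN and checks
 * for collisions with existing types.
 */
static char *
make_multirange_name(const char *rangeTypeName)
{
	const char *hit = strstr(rangeTypeName, "range");
	size_t		len = strlen(rangeTypeName);
	char	   *result;

	if (hit != NULL)
	{
		size_t		prefix = (size_t) (hit - rangeTypeName);

		/* copy the prefix, insert "multi", then copy "range..." */
		result = malloc(len + strlen("multi") + 1);
		memcpy(result, rangeTypeName, prefix);
		strcpy(result + prefix, "multi");
		strcpy(result + prefix + strlen("multi"), hit);
	}
	else
	{
		result = malloc(len + strlen("_multirange") + 1);
		strcpy(result, rangeTypeName);
		strcat(result, "_multirange");
	}
	return result;
}
```

So "int4range" maps to "int4multirange", while a range type named without "range" gets the "_multirange" suffix.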
|
|
|
Oid multirangeOid;
|
|
|
|
Oid multirangeArrayOid;
|
2011-11-21 22:19:53 +01:00
|
|
|
Oid rangeSubtype = InvalidOid;
|
2011-11-14 18:08:48 +01:00
|
|
|
List *rangeSubOpclassName = NIL;
|
|
|
|
List *rangeCollationName = NIL;
|
2011-11-21 22:19:53 +01:00
|
|
|
List *rangeCanonicalName = NIL;
|
|
|
|
List *rangeSubtypeDiffName = NIL;
|
|
|
|
Oid rangeSubOpclass;
|
|
|
|
Oid rangeCollation;
|
|
|
|
regproc rangeCanonical;
|
|
|
|
regproc rangeSubtypeDiff;
|
2011-11-15 03:42:04 +01:00
|
|
|
int16 subtyplen;
|
|
|
|
bool subtypbyval;
|
|
|
|
char subtypalign;
|
|
|
|
char alignment;
|
2011-11-14 18:08:48 +01:00
|
|
|
AclResult aclresult;
|
2011-11-15 03:42:04 +01:00
|
|
|
ListCell *lc;
|
Change many routines to return ObjectAddress rather than OID
The changed routines are mostly those that can be directly called by
ProcessUtilitySlow; the intention is to make the affected object
information more precise, in support of future event trigger changes.
Originally it was envisioned that the OID of the affected object would
be enough, and in most cases that is correct, but upon actually
implementing the event trigger changes it turned out that ObjectAddress
is more widely useful.
Additionally, some command execution routines grew an output argument
that's an object address which provides further info about the executed
command. To wit:
* for ALTER DOMAIN / ADD CONSTRAINT, it corresponds to the address of
the new constraint
* for ALTER OBJECT / SET SCHEMA, it corresponds to the address of the
schema that originally contained the object.
* for ALTER EXTENSION {ADD, DROP} OBJECT, it corresponds to the address
of the object added to or dropped from the extension.
There's no user-visible change in this commit, and no functional change
either.
Discussion: 20150218213255.GC6717@tamriel.snowman.net
Reviewed-By: Stephen Frost, Andres Freund
2015-03-03 18:10:50 +01:00
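For reference, the ObjectAddress that these routines now return is just a (catalog, object, sub-object) triple. The sketch below mirrors the definition in src/include/catalog/objectaddress.h, with a local Oid typedef so the fragment stands alone; field widths are approximations, not the authoritative header.

```c
typedef unsigned int Oid;	/* local stand-in for postgres_ext.h's Oid */

/* Mirrors the shape of ObjectAddress (src/include/catalog/objectaddress.h) */
typedef struct ObjectAddress
{
	Oid			classId;		/* OID of the system catalog containing the object */
	Oid			objectId;		/* OID of the object within that catalog */
	int			objectSubId;	/* sub-object id (e.g. column number), or 0 */
} ObjectAddress;
```

Unlike a bare OID, this triple pins down which catalog the object lives in and, where relevant, which sub-object (such as a column) is affected.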
|
|
|
ObjectAddress address;
|
2020-12-20 14:27:01 +01:00
|
|
|
ObjectAddress mltrngaddress PG_USED_FOR_ASSERTS_ONLY;
|
2021-06-15 14:59:20 +02:00
|
|
|
	Oid			singleArgConstructorOid;
|
2011-11-03 12:16:28 +01:00
|
|
|
|
|
|
|
/* Convert list of names to a name and namespace */
|
|
|
|
typeNamespace = QualifiedNameGetCreationNamespace(stmt->typeName,
|
|
|
|
&typeName);
|
|
|
|
|
|
|
|
/* Check we have creation rights in target namespace */
|
|
|
|
aclresult = pg_namespace_aclcheck(typeNamespace, GetUserId(), ACL_CREATE);
|
|
|
|
if (aclresult != ACLCHECK_OK)
|
2017-12-02 15:26:34 +01:00
|
|
|
aclcheck_error(aclresult, OBJECT_SCHEMA,
|
2011-11-03 12:16:28 +01:00
|
|
|
get_namespace_name(typeNamespace));
|
|
|
|
|
|
|
|
/*
|
2011-11-21 22:19:53 +01:00
|
|
|
* Look to see if type already exists.
|
2011-11-03 12:16:28 +01:00
|
|
|
*/
|
Remove WITH OIDS support, change oid catalog column visibility.
Previously tables declared WITH OIDS, including a significant fraction
of the catalog tables, stored the oid column not as a normal column,
but as part of the tuple header.
This special column was not shown by default, which was somewhat odd,
as it's often (consider e.g. pg_class.oid) one of the more important
parts of a row. Neither pg_dump nor COPY included the contents of the
oid column by default.
The fact that the oid column was not an ordinary column necessitated a
significant amount of special case code to support oid columns. That
was already painful for the existing code, but upcoming work aiming to make
table storage pluggable would have required expanding and duplicating
that "specialness" significantly.
WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0).
Remove it.
Removing includes:
- CREATE TABLE and ALTER TABLE syntax for declaring the table to be
WITH OIDS has been removed (WITH (oids[ = true]) will error out)
- pg_dump does not support dumping tables declared WITH OIDS and will
issue a warning when dumping one (and ignore the oid column).
- restoring a pg_dump archive with pg_restore will warn when
restoring a table with oid contents (and ignore the oid column)
- COPY will refuse to load a binary dump that includes oids.
- pg_upgrade will error out when encountering tables declared WITH
OIDS, they have to be altered to remove the oid column first.
- Functionality to access the oid of the last inserted row (like
plpgsql's RESULT_OID, spi's SPI_lastoid, ...) has been removed.
The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false)
for CREATE TABLE) is still supported. While that requires a bit of
support code, it seems unnecessary to break applications / dumps that
do not use oids, and are explicit about not using them.
The biggest user of WITH OID columns was postgres' catalog. This
commit changes all 'magic' oid columns to be columns that are normally
declared and stored. To reduce unnecessary query breakage all the
newly added columns are still named 'oid', even if a table's column
naming scheme would indicate 'reloid' or such. This obviously
requires adapting a lot code, mostly replacing oid access via
HeapTupleGetOid() with access to the underlying Form_pg_*->oid column.
The bootstrap process now assigns oids for all oid columns in
genbki.pl that do not have an explicit value (starting at the largest
oid previously used); only oids assigned later will be above
FirstBootstrapObjectId. As the oid column now is a normal column the
special bootstrap syntax for oids has been removed.
Oids are not automatically assigned during insertion anymore, all
backend code explicitly assigns oids with GetNewOidWithIndex(). For
the rare case that insertions into the catalog via SQL are called for
the new pg_nextoid() function can be used (which only works on catalog
tables).
The fact that oid columns on system tables are now normal columns
means that they will be included in the set of columns expanded
by * (i.e. SELECT * FROM pg_class will now include the table's oid,
previously it did not). It'd not technically be hard to hide the oid
column by default, but that'd mean confusing behavior would either
have to be carried forward forever, or it'd cause breakage down the
line.
While it's not unlikely that further adjustments are needed, the
scope/invasiveness of the patch makes it worthwhile to merge this
now. It's painful to maintain externally, too complicated to commit
after the code freeze, and a dependency of a number of other
patches.
Catversion bump, for obvious reasons.
Author: Andres Freund, with contributions by John Naylor
Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de
2018-11-21 00:36:57 +01:00
|
|
|
typoid = GetSysCacheOid2(TYPENAMENSP, Anum_pg_type_oid,
|
2011-11-03 12:16:28 +01:00
|
|
|
CStringGetDatum(typeName),
|
|
|
|
ObjectIdGetDatum(typeNamespace));
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If it's not a shell, see if it's an autogenerated array type, and if so
|
|
|
|
* rename it out of the way.
|
|
|
|
*/
|
|
|
|
if (OidIsValid(typoid) && get_typisdefined(typoid))
|
|
|
|
{
|
|
|
|
if (moveArrayTypeName(typoid, typeName, typeNamespace))
|
|
|
|
typoid = InvalidOid;
|
2011-11-21 22:19:53 +01:00
|
|
|
else
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_DUPLICATE_OBJECT),
|
|
|
|
errmsg("type \"%s\" already exists", typeName)));
|
2011-11-03 12:16:28 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
2020-03-05 21:48:56 +01:00
|
|
|
* Unlike DefineType(), we don't insist on a shell type existing first, as
|
|
|
|
* it's only needed if the user wants to specify a canonical function.
|
2011-11-03 12:16:28 +01:00
|
|
|
*/
|
|
|
|
|
2011-11-21 22:19:53 +01:00
|
|
|
/* Extract the parameters from the parameter list */
|
2011-11-03 12:16:28 +01:00
|
|
|
foreach(lc, stmt->params)
|
|
|
|
{
|
2011-11-21 22:19:53 +01:00
|
|
|
DefElem *defel = (DefElem *) lfirst(lc);
|
2011-11-03 12:16:28 +01:00
|
|
|
|
Avoid unnecessary use of pg_strcasecmp for already-downcased identifiers.
We have a lot of code in which option names, which from the user's
viewpoint are logically keywords, are passed through the grammar as plain
identifiers, and then matched to string literals during command execution.
This approach avoids making words into lexer keywords unnecessarily. Some
places matched these strings using plain strcmp, some using pg_strcasecmp.
But the latter should be unnecessary since identifiers would have been
downcased on their way through the parser. Aside from any efficiency
concerns (probably not a big factor), the lack of consistency in this area
creates a hazard of subtle bugs due to different places coming to different
conclusions about whether two option names are the same or different.
Hence, standardize on using strcmp() to match any option names that are
expected to have been fed through the parser.
This does create a user-visible behavioral change, which is that while
formerly all of these would work:
alter table foo set (fillfactor = 50);
alter table foo set (FillFactor = 50);
alter table foo set ("fillfactor" = 50);
alter table foo set ("FillFactor" = 50);
now the last case will fail because that double-quoted identifier is
different from the others. However, none of our documentation says that
you can use a quoted identifier in such contexts at all, and we should
discourage doing so since it would break if we ever decide to parse such
constructs as true lexer keywords rather than poor man's substitutes.
So this shouldn't create a significant compatibility issue for users.
Daniel Gustafsson, reviewed by Michael Paquier, small changes by me
Discussion: https://postgr.es/m/29405B24-564E-476B-98C0-677A29805B84@yesql.se
2018-01-27 00:25:02 +01:00
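The point of the commit above can be shown with a toy downcasing helper: because the parser lower-cases unquoted identifiers before command execution ever sees them, plain strcmp() suffices, and only a quoted mixed-case identifier survives to fail the match. downcase_identifier() here is a simplified stand-in for the parser's own machinery, not backend code.

```c
#include <ctype.h>
#include <string.h>

/*
 * Simplified stand-in for the parser's identifier downcasing: unquoted
 * identifiers reach command execution already lower-cased, which is why
 * strcmp() (not pg_strcasecmp()) is enough when matching option names.
 * A double-quoted identifier bypasses this step and keeps its case.
 */
static void
downcase_identifier(char *ident)
{
	for (; *ident; ident++)
		*ident = (char) tolower((unsigned char) *ident);
}
```

With this model, `subtype`, `SubType`, and `"subtype"` all arrive as "subtype", while `"SubType"` arrives verbatim and no longer matches, which is exactly the behavioral change the commit describes.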
|
|
|
if (strcmp(defel->defname, "subtype") == 0)
|
2011-11-03 12:16:28 +01:00
|
|
|
{
|
|
|
|
if (OidIsValid(rangeSubtype))
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_SYNTAX_ERROR),
|
|
|
|
errmsg("conflicting or redundant options")));
|
2011-11-21 22:19:53 +01:00
|
|
|
/* we can look up the subtype name immediately */
|
2011-11-03 12:16:28 +01:00
|
|
|
rangeSubtype = typenameTypeId(NULL, defGetTypeName(defel));
|
|
|
|
}
|
2018-01-27 00:25:02 +01:00
|
|
|
else if (strcmp(defel->defname, "subtype_opclass") == 0)
|
2011-11-03 12:16:28 +01:00
|
|
|
{
|
2011-11-21 22:19:53 +01:00
|
|
|
if (rangeSubOpclassName != NIL)
|
2011-11-03 12:16:28 +01:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_SYNTAX_ERROR),
|
|
|
|
errmsg("conflicting or redundant options")));
|
2011-11-21 22:19:53 +01:00
|
|
|
rangeSubOpclassName = defGetQualifiedName(defel);
|
2011-11-03 12:16:28 +01:00
|
|
|
}
|
2018-01-27 00:25:02 +01:00
|
|
|
else if (strcmp(defel->defname, "collation") == 0)
|
2011-11-03 12:16:28 +01:00
|
|
|
{
|
|
|
|
if (rangeCollationName != NIL)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_SYNTAX_ERROR),
|
|
|
|
errmsg("conflicting or redundant options")));
|
|
|
|
rangeCollationName = defGetQualifiedName(defel);
|
|
|
|
}
|
2018-01-27 00:25:02 +01:00
|
|
|
else if (strcmp(defel->defname, "canonical") == 0)
|
2011-11-03 12:16:28 +01:00
|
|
|
{
|
2011-11-21 22:19:53 +01:00
|
|
|
if (rangeCanonicalName != NIL)
|
2011-11-03 12:16:28 +01:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_SYNTAX_ERROR),
|
|
|
|
errmsg("conflicting or redundant options")));
|
2011-11-21 22:19:53 +01:00
|
|
|
rangeCanonicalName = defGetQualifiedName(defel);
|
2011-11-03 12:16:28 +01:00
|
|
|
}
|
2018-01-27 00:25:02 +01:00
|
|
|
else if (strcmp(defel->defname, "subtype_diff") == 0)
|
2011-11-03 12:16:28 +01:00
|
|
|
{
|
2011-11-21 22:19:53 +01:00
|
|
|
if (rangeSubtypeDiffName != NIL)
|
2011-11-03 12:16:28 +01:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_SYNTAX_ERROR),
|
|
|
|
errmsg("conflicting or redundant options")));
|
2011-11-21 22:19:53 +01:00
|
|
|
rangeSubtypeDiffName = defGetQualifiedName(defel);
|
2011-11-03 12:16:28 +01:00
|
|
|
}
|
2020-12-20 05:20:33 +01:00
|
|
|
else if (strcmp(defel->defname, "multirange_type_name") == 0)
|
|
|
|
{
|
|
|
|
if (multirangeTypeName != NULL)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_SYNTAX_ERROR),
|
|
|
|
errmsg("conflicting or redundant options")));
|
|
|
|
/* we can resolve the name and namespace immediately */
|
|
|
|
multirangeNamespace = QualifiedNameGetCreationNamespace(defGetQualifiedName(defel),
|
|
|
|
&multirangeTypeName);
|
|
|
|
}
|
2011-11-03 12:16:28 +01:00
|
|
|
else
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_SYNTAX_ERROR),
|
|
|
|
errmsg("type attribute \"%s\" not recognized",
|
|
|
|
defel->defname)));
|
|
|
|
}
|
|
|
|
|
2011-11-21 22:19:53 +01:00
|
|
|
/* Must have a subtype */
|
2011-11-03 12:16:28 +01:00
|
|
|
if (!OidIsValid(rangeSubtype))
|
2011-11-14 18:08:48 +01:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_SYNTAX_ERROR),
|
|
|
|
errmsg("type attribute \"subtype\" is required")));
|
2011-11-21 22:19:53 +01:00
|
|
|
/* disallow ranges of pseudotypes */
|
|
|
|
if (get_typtype(rangeSubtype) == TYPTYPE_PSEUDO)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_DATATYPE_MISMATCH),
|
|
|
|
errmsg("range subtype cannot be %s",
|
|
|
|
format_type_be(rangeSubtype))));
|
2011-11-03 12:16:28 +01:00
|
|
|
|
2011-11-21 22:19:53 +01:00
|
|
|
/* Identify subopclass */
|
|
|
|
rangeSubOpclass = findRangeSubOpclass(rangeSubOpclassName, rangeSubtype);
|
|
|
|
|
|
|
|
/* Identify collation to use, if any */
|
2011-11-03 12:16:28 +01:00
|
|
|
if (type_is_collatable(rangeSubtype))
|
|
|
|
{
|
2011-11-21 22:19:53 +01:00
|
|
|
if (rangeCollationName != NIL)
|
2011-11-03 12:16:28 +01:00
|
|
|
rangeCollation = get_collation_oid(rangeCollationName, false);
|
2011-11-21 22:19:53 +01:00
|
|
|
else
|
|
|
|
rangeCollation = get_typcollation(rangeSubtype);
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
if (rangeCollationName != NIL)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
|
|
|
|
errmsg("range collation specified but subtype does not support collation")));
|
|
|
|
rangeCollation = InvalidOid;
|
2011-11-03 12:16:28 +01:00
|
|
|
}
|
|
|
|
|
2011-11-21 22:19:53 +01:00
|
|
|
/* Identify support functions, if provided */
|
|
|
|
if (rangeCanonicalName != NIL)
|
2020-03-05 21:48:56 +01:00
|
|
|
{
|
|
|
|
if (!OidIsValid(typoid))
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
|
|
|
errmsg("cannot specify a canonical function without a pre-created shell type"),
|
|
|
|
errhint("Create the type as a shell type, then create its canonicalization function, then do a full CREATE TYPE.")));
|
2011-11-21 22:19:53 +01:00
|
|
|
rangeCanonical = findRangeCanonicalFunction(rangeCanonicalName,
|
|
|
|
typoid);
|
2020-03-05 21:48:56 +01:00
|
|
|
}
|
2011-11-21 22:19:53 +01:00
|
|
|
else
|
|
|
|
rangeCanonical = InvalidOid;
|
2011-11-03 12:16:28 +01:00
|
|
|
|
|
|
|
if (rangeSubtypeDiffName != NIL)
|
2011-11-15 03:42:04 +01:00
|
|
|
rangeSubtypeDiff = findRangeSubtypeDiffFunction(rangeSubtypeDiffName,
|
|
|
|
rangeSubtype);
|
2011-11-21 22:19:53 +01:00
|
|
|
else
|
|
|
|
rangeSubtypeDiff = InvalidOid;
|
|
|
|
|
2011-11-15 03:42:04 +01:00
|
|
|
get_typlenbyvalalign(rangeSubtype,
|
|
|
|
&subtyplen, &subtypbyval, &subtypalign);
|
|
|
|
|
2020-03-04 16:34:25 +01:00
|
|
|
/* alignment must be TYPALIGN_INT or TYPALIGN_DOUBLE for ranges */
|
|
|
|
alignment = (subtypalign == TYPALIGN_DOUBLE) ? TYPALIGN_DOUBLE : TYPALIGN_INT;
|
2011-11-15 03:42:04 +01:00
|
|
|
|
2020-12-20 05:20:33 +01:00
|
|
|
/* Allocate OIDs for the array type, its multirange, and its multirange array */
|
2011-11-03 12:16:28 +01:00
|
|
|
rangeArrayOid = AssignTypeArrayOid();
|
2020-12-20 05:20:33 +01:00
|
|
|
multirangeOid = AssignTypeMultirangeOid();
|
|
|
|
multirangeArrayOid = AssignTypeMultirangeArrayOid();
|
2011-11-03 12:16:28 +01:00
|
|
|
|
|
|
|
/* Create the pg_type entry */
|
2015-03-03 18:10:50 +01:00
|
|
|
address =
|
2011-11-03 12:16:28 +01:00
|
|
|
TypeCreate(InvalidOid, /* no predetermined type OID */
|
|
|
|
typeName, /* type name */
|
Phase 2 of pgindent updates.
Change pg_bsd_indent to follow upstream rules for placement of comments
to the right of code, and remove pgindent hack that caused comments
following #endif to not obey the general rule.
Commit e3860ffa4dd0dad0dd9eea4be9cc1412373a8c89 wasn't actually using
the published version of pg_bsd_indent, but a hacked-up version that
tried to minimize the amount of movement of comments to the right of
code. The situation of interest is where such a comment has to be
moved to the right of its default placement at column 33 because there's
code there. BSD indent has always moved right in units of tab stops
in such cases --- but in the previous incarnation, indent was working
in 8-space tab stops, while now it knows we use 4-space tabs. So the
net result is that in about half the cases, such comments are placed
one tab stop left of before. This is better all around: it leaves
more room on the line for comment text, and it means that in such
cases the comment uniformly starts at the next 4-space tab stop after
the code, rather than sometimes one and sometimes two tabs after.
Also, ensure that comments following #endif are indented the same
as comments following other preprocessor commands such as #else.
That inconsistency turns out to have been self-inflicted damage
from a poorly-thought-through post-indent "fixup" in pgindent.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 21:18:54 +02:00
|
|
|
typeNamespace, /* namespace */
|
2011-11-03 12:16:28 +01:00
|
|
|
InvalidOid, /* relation oid (n/a here) */
|
|
|
|
0, /* relation kind (ditto) */
|
|
|
|
GetUserId(), /* owner's ID */
|
2011-11-15 03:42:04 +01:00
|
|
|
-1, /* internal size (always varlena) */
|
2017-06-21 21:18:54 +02:00
|
|
|
TYPTYPE_RANGE, /* type-type (range type) */
|
2011-11-03 12:16:28 +01:00
|
|
|
TYPCATEGORY_RANGE, /* type-category (range type) */
|
|
|
|
false, /* range types are never preferred */
|
|
|
|
DEFAULT_TYPDELIM, /* array element delimiter */
|
|
|
|
F_RANGE_IN, /* input procedure */
|
2011-11-14 18:08:48 +01:00
|
|
|
F_RANGE_OUT, /* output procedure */
|
|
|
|
F_RANGE_RECV, /* receive procedure */
|
|
|
|
F_RANGE_SEND, /* send procedure */
|
2011-11-03 12:16:28 +01:00
|
|
|
InvalidOid, /* typmodin procedure - none */
|
|
|
|
InvalidOid, /* typmodout procedure - none */
|
2011-11-23 06:03:22 +01:00
|
|
|
F_RANGE_TYPANALYZE, /* analyze procedure */
|
Support subscripting of arbitrary types, not only arrays.
This patch generalizes the subscripting infrastructure so that any
data type can be subscripted, if it provides a handler function to
define what that means. Traditional variable-length (varlena) arrays
all use array_subscript_handler(), while the existing fixed-length
types that support subscripting use raw_array_subscript_handler().
It's expected that other types that want to use subscripting notation
will define their own handlers. (This patch provides no such new
features, though; it only lays the foundation for them.)
To do this, move the parser's semantic processing of subscripts
(including coercion to whatever data type is required) into a
method callback supplied by the handler. On the execution side,
replace the ExecEvalSubscriptingRef* layer of functions with direct
calls to callback-supplied execution routines. (Thus, essentially
no new run-time overhead should be caused by this patch. Indeed,
there is room to remove some overhead by supplying specialized
execution routines. This patch does a little bit in that line,
but more could be done.)
Additional work is required here and there to remove formerly
hard-wired assumptions about the result type, collation, etc
of a SubscriptingRef expression node; and to remove assumptions
that the subscript values must be integers.
One useful side-effect of this is that we now have a less squishy
mechanism for identifying whether a data type is a "true" array:
instead of wiring in weird rules about typlen, we can look to see
if pg_type.typsubscript == F_ARRAY_SUBSCRIPT_HANDLER. For this
to be bulletproof, we have to forbid user-defined types from using
that handler directly; but there seems no good reason for them to
do so.
This patch also removes assumptions that the number of subscripts
is limited to MAXDIM (6), or indeed has any hard-wired limit.
That limit still applies to types handled by array_subscript_handler
or raw_array_subscript_handler, but to discourage other dependencies
on this constant, I've moved it from c.h to utils/array.h.
Dmitry Dolgov, reviewed at various times by Tom Lane, Arthur Zakirov,
Peter Eisentraut, Pavel Stehule
Discussion: https://postgr.es/m/CA+q6zcVDuGBv=M0FqBYX8DPebS3F_0KQ6OVFobGJPM507_SZ_w@mail.gmail.com
Discussion: https://postgr.es/m/CA+q6zcVovR+XY4mfk-7oNk-rF91gH0PebnNfuUjuuDsyHjOcVA@mail.gmail.com
2020-12-09 18:40:37 +01:00
|
|
|
InvalidOid, /* subscript procedure - none */
|
2011-11-15 03:42:04 +01:00
|
|
|
InvalidOid, /* element type ID - none */
|
2011-11-03 12:16:28 +01:00
|
|
|
false, /* this is not an array type */
|
Phase 2 of pgindent updates.
Change pg_bsd_indent to follow upstream rules for placement of comments
to the right of code, and remove pgindent hack that caused comments
following #endif to not obey the general rule.
Commit e3860ffa4dd0dad0dd9eea4be9cc1412373a8c89 wasn't actually using
the published version of pg_bsd_indent, but a hacked-up version that
tried to minimize the amount of movement of comments to the right of
code. The situation of interest is where such a comment has to be
moved to the right of its default placement at column 33 because there's
code there. BSD indent has always moved right in units of tab stops
in such cases --- but in the previous incarnation, indent was working
in 8-space tab stops, while now it knows we use 4-space tabs. So the
net result is that in about half the cases, such comments are placed
one tab stop left of before. This is better all around: it leaves
more room on the line for comment text, and it means that in such
cases the comment uniformly starts at the next 4-space tab stop after
the code, rather than sometimes one and sometimes two tabs after.
Also, ensure that comments following #endif are indented the same
as comments following other preprocessor commands such as #else.
That inconsistency turns out to have been self-inflicted damage
from a poorly-thought-through post-indent "fixup" in pgindent.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 21:18:54 +02:00
|
|
|
rangeArrayOid, /* array type we are about to create */
|
2011-11-03 12:16:28 +01:00
|
|
|
InvalidOid, /* base type ID (only for domains) */
|
|
|
|
NULL, /* never a default type value */
|
2011-11-21 22:19:53 +01:00
|
|
|
NULL, /* no binary form available either */
|
2011-11-03 12:16:28 +01:00
|
|
|
false, /* never passed by value */
|
2011-11-15 03:42:04 +01:00
|
|
|
alignment, /* alignment */
|
2020-03-04 16:34:25 +01:00
|
|
|
TYPSTORAGE_EXTENDED, /* TOAST strategy (always extended) */
|
2011-11-03 12:16:28 +01:00
|
|
|
-1, /* typMod (Domains only) */
|
|
|
|
0, /* Array dimensions of typbasetype */
|
|
|
|
false, /* Type NOT NULL */
|
2011-11-21 22:19:53 +01:00
|
|
|
InvalidOid); /* type's collation (ranges never have one) */
|
2020-03-05 21:48:56 +01:00
|
|
|
Assert(typoid == InvalidOid || typoid == address.objectId);
|
|
|
|
typoid = address.objectId;

	/* Create the multirange that goes with it */
	if (multirangeTypeName)
	{
		Oid			old_typoid;

		/*
		 * Look to see if multirange type already exists.
		 */
		old_typoid = GetSysCacheOid2(TYPENAMENSP, Anum_pg_type_oid,
									 CStringGetDatum(multirangeTypeName),
									 ObjectIdGetDatum(multirangeNamespace));

		/*
		 * If it's not a shell, see if it's an autogenerated array type, and
		 * if so rename it out of the way.
		 */
		if (OidIsValid(old_typoid) && get_typisdefined(old_typoid))
		{
			if (!moveArrayTypeName(old_typoid, multirangeTypeName, multirangeNamespace))
				ereport(ERROR,
						(errcode(ERRCODE_DUPLICATE_OBJECT),
						 errmsg("type \"%s\" already exists", multirangeTypeName)));
		}
	}
	else
	{
		/* Generate multirange name automatically */
		multirangeNamespace = typeNamespace;
		multirangeTypeName = makeMultirangeTypeName(typeName, multirangeNamespace);
	}

	mltrngaddress =
		TypeCreate(multirangeOid,	/* force assignment of this type OID */
				   multirangeTypeName,	/* type name */
				   multirangeNamespace, /* namespace */
				   InvalidOid,	/* relation oid (n/a here) */
				   0,	/* relation kind (ditto) */
				   GetUserId(), /* owner's ID */
				   -1,	/* internal size (always varlena) */
				   TYPTYPE_MULTIRANGE,	/* type-type (multirange type) */
				   TYPCATEGORY_RANGE,	/* type-category (range type) */
				   false,	/* multirange types are never preferred */
				   DEFAULT_TYPDELIM,	/* array element delimiter */
				   F_MULTIRANGE_IN, /* input procedure */
				   F_MULTIRANGE_OUT,	/* output procedure */
				   F_MULTIRANGE_RECV,	/* receive procedure */
				   F_MULTIRANGE_SEND,	/* send procedure */
				   InvalidOid,	/* typmodin procedure - none */
				   InvalidOid,	/* typmodout procedure - none */
				   F_MULTIRANGE_TYPANALYZE, /* analyze procedure */
				   InvalidOid,	/* subscript procedure - none */
				   InvalidOid,	/* element type ID - none */
				   false,	/* this is not an array type */
				   multirangeArrayOid,	/* array type we are about to create */
				   InvalidOid,	/* base type ID (only for domains) */
				   NULL,	/* never a default type value */
				   NULL,	/* no binary form available either */
				   false,	/* never passed by value */
				   alignment,	/* alignment */
				   TYPSTORAGE_EXTENDED, /* TOAST strategy (always extended) */
				   -1,	/* typMod (Domains only) */
				   0,	/* Array dimensions of typbasetype */
				   false,	/* Type NOT NULL */
				   InvalidOid); /* type's collation (ranges never have one) */
	Assert(multirangeOid == mltrngaddress.objectId);

	/* Create the entry in pg_range */
	RangeCreate(typoid, rangeSubtype, rangeCollation, rangeSubOpclass,
				rangeCanonical, rangeSubtypeDiff, multirangeOid);

	/*
	 * Create the array type that goes with it.
	 */
	rangeArrayName = makeArrayTypeName(typeName, typeNamespace);

	TypeCreate(rangeArrayOid,	/* force assignment of this type OID */
			   rangeArrayName,	/* type name */
			   typeNamespace,	/* namespace */
			   InvalidOid,	/* relation oid (n/a here) */
			   0,	/* relation kind (ditto) */
			   GetUserId(), /* owner's ID */
			   -1,	/* internal size (always varlena) */
			   TYPTYPE_BASE,	/* type-type (base type) */
			   TYPCATEGORY_ARRAY,	/* type-category (array) */
			   false,	/* array types are never preferred */
			   DEFAULT_TYPDELIM,	/* array element delimiter */
			   F_ARRAY_IN,	/* input procedure */
			   F_ARRAY_OUT, /* output procedure */
			   F_ARRAY_RECV,	/* receive procedure */
			   F_ARRAY_SEND,	/* send procedure */
			   InvalidOid,	/* typmodin procedure - none */
			   InvalidOid,	/* typmodout procedure - none */
			   F_ARRAY_TYPANALYZE,	/* analyze procedure */
			   F_ARRAY_SUBSCRIPT_HANDLER,	/* array subscript procedure */
			   typoid,	/* element type ID */
			   true,	/* yes this is an array type */
			   InvalidOid,	/* no further array type */
			   InvalidOid,	/* base type ID */
			   NULL,	/* never a default type value */
			   NULL,	/* binary default isn't sent either */
			   false,	/* never passed by value */
			   alignment,	/* alignment - same as range's */
			   TYPSTORAGE_EXTENDED, /* ARRAY is always toastable */
			   -1,	/* typMod (Domains only) */
			   0,	/* Array dimensions of typbasetype */
			   false,	/* Type NOT NULL */
			   InvalidOid); /* typcollation */

	pfree(rangeArrayName);
|
|
|
|
|
Multirange datatypes
Multiranges are basically sorted arrays of non-overlapping ranges with
set-theoretic operations defined over them.
Since v14, each range type automatically gets a corresponding multirange
datatype. There are both manual and automatic mechanisms for naming multirange
types. Once can specify multirange type name using multirange_type_name
attribute in CREATE TYPE. Otherwise, a multirange type name is generated
automatically. If the range type name contains "range" then we change that to
"multirange". Otherwise, we add "_multirange" to the end.
Implementation of multiranges comes with a space-efficient internal
representation format, which evades extra paddings and duplicated storage of
oids. Altogether this format allows fetching a particular range by its index
in O(n).
Statistic gathering and selectivity estimation are implemented for multiranges.
For this purpose, stored multirange is approximated as union range without gaps.
This field will likely need improvements in the future.
Catversion is bumped.
Discussion: https://postgr.es/m/CALNJ-vSUpQ_Y%3DjXvTxt1VYFztaBSsWVXeF1y6gTYQ4bOiWDLgQ%40mail.gmail.com
Discussion: https://postgr.es/m/a0b8026459d1e6167933be2104a6174e7d40d0ab.camel%40j-davis.com#fe7218c83b08068bfffb0c5293eceda0
Author: Paul Jungwirth, revised by me
Reviewed-by: David Fetter, Corey Huinker, Jeff Davis, Pavel Stehule
Reviewed-by: Alvaro Herrera, Tom Lane, Isaac Morland, David G. Johnston
Reviewed-by: Zhihong Yu, Alexander Korotkov
2020-12-20 05:20:33 +01:00
|
|
|
/* Create the multirange's array type */
|
|
|
|
|
|
|
|
multirangeArrayName = makeArrayTypeName(multirangeTypeName, typeNamespace);
|
|
|
|
|
|
|
|
TypeCreate(multirangeArrayOid, /* force assignment of this type OID */
|
|
|
|
multirangeArrayName, /* type name */
|
2021-05-12 19:14:10 +02:00
|
|
|
multirangeNamespace, /* namespace */
|
Multirange datatypes
Multiranges are basically sorted arrays of non-overlapping ranges with
set-theoretic operations defined over them.
Since v14, each range type automatically gets a corresponding multirange
datatype. There are both manual and automatic mechanisms for naming multirange
types. Once can specify multirange type name using multirange_type_name
attribute in CREATE TYPE. Otherwise, a multirange type name is generated
automatically. If the range type name contains "range" then we change that to
"multirange". Otherwise, we add "_multirange" to the end.
Implementation of multiranges comes with a space-efficient internal
representation format, which evades extra paddings and duplicated storage of
oids. Altogether this format allows fetching a particular range by its index
in O(n).
Statistic gathering and selectivity estimation are implemented for multiranges.
For this purpose, stored multirange is approximated as union range without gaps.
This field will likely need improvements in the future.
Catversion is bumped.
Discussion: https://postgr.es/m/CALNJ-vSUpQ_Y%3DjXvTxt1VYFztaBSsWVXeF1y6gTYQ4bOiWDLgQ%40mail.gmail.com
Discussion: https://postgr.es/m/a0b8026459d1e6167933be2104a6174e7d40d0ab.camel%40j-davis.com#fe7218c83b08068bfffb0c5293eceda0
Author: Paul Jungwirth, revised by me
Reviewed-by: David Fetter, Corey Huinker, Jeff Davis, Pavel Stehule
Reviewed-by: Alvaro Herrera, Tom Lane, Isaac Morland, David G. Johnston
Reviewed-by: Zhihong Yu, Alexander Korotkov
2020-12-20 05:20:33 +01:00
|
|
|
InvalidOid, /* relation oid (n/a here) */
|
|
|
|
0, /* relation kind (ditto) */
|
|
|
|
GetUserId(), /* owner's ID */
|
|
|
|
-1, /* internal size (always varlena) */
|
|
|
|
TYPTYPE_BASE, /* type-type (base type) */
|
|
|
|
TYPCATEGORY_ARRAY, /* type-category (array) */
|
|
|
|
false, /* array types are never preferred */
|
|
|
|
DEFAULT_TYPDELIM, /* array element delimiter */
|
|
|
|
F_ARRAY_IN, /* input procedure */
|
|
|
|
F_ARRAY_OUT, /* output procedure */
|
|
|
|
F_ARRAY_RECV, /* receive procedure */
|
|
|
|
F_ARRAY_SEND, /* send procedure */
|
|
|
|
InvalidOid, /* typmodin procedure - none */
|
|
|
|
InvalidOid, /* typmodout procedure - none */
|
|
|
|
F_ARRAY_TYPANALYZE, /* analyze procedure */
|
|
|
|
F_ARRAY_SUBSCRIPT_HANDLER, /* array subscript procedure */
|
|
|
|
multirangeOid, /* element type ID */
|
|
|
|
true, /* yes this is an array type */
|
|
|
|
InvalidOid, /* no further array type */
|
|
|
|
InvalidOid, /* base type ID */
|
|
|
|
NULL, /* never a default type value */
|
|
|
|
NULL, /* binary default isn't sent either */
|
|
|
|
false, /* never passed by value */
|
|
|
|
alignment, /* alignment - same as range's */
|
|
|
|
'x', /* ARRAY is always toastable */
|
|
|
|
-1, /* typMod (Domains only) */
|
|
|
|
0, /* Array dimensions of typbasetype */
|
|
|
|
false, /* Type NOT NULL */
|
|
|
|
InvalidOid); /* typcollation */
|
|
|
|
|
2011-11-15 03:42:04 +01:00
|
|
|
/* And create the constructor functions for this range type */
|
2011-11-21 22:19:53 +01:00
|
|
|
makeRangeConstructors(typeName, typeNamespace, typoid, rangeSubtype);
|
Multirange datatypes
Multiranges are basically sorted arrays of non-overlapping ranges with
set-theoretic operations defined over them.
Since v14, each range type automatically gets a corresponding multirange
datatype. There are both manual and automatic mechanisms for naming multirange
types. Once can specify multirange type name using multirange_type_name
attribute in CREATE TYPE. Otherwise, a multirange type name is generated
automatically. If the range type name contains "range" then we change that to
"multirange". Otherwise, we add "_multirange" to the end.
Implementation of multiranges comes with a space-efficient internal
representation format, which evades extra paddings and duplicated storage of
oids. Altogether this format allows fetching a particular range by its index
in O(n).
Statistic gathering and selectivity estimation are implemented for multiranges.
For this purpose, stored multirange is approximated as union range without gaps.
This field will likely need improvements in the future.
Catversion is bumped.
Discussion: https://postgr.es/m/CALNJ-vSUpQ_Y%3DjXvTxt1VYFztaBSsWVXeF1y6gTYQ4bOiWDLgQ%40mail.gmail.com
Discussion: https://postgr.es/m/a0b8026459d1e6167933be2104a6174e7d40d0ab.camel%40j-davis.com#fe7218c83b08068bfffb0c5293eceda0
Author: Paul Jungwirth, revised by me
Reviewed-by: David Fetter, Corey Huinker, Jeff Davis, Pavel Stehule
Reviewed-by: Alvaro Herrera, Tom Lane, Isaac Morland, David G. Johnston
Reviewed-by: Zhihong Yu, Alexander Korotkov
2020-12-20 05:20:33 +01:00
|
|
|
makeMultirangeConstructors(multirangeTypeName, typeNamespace,
|
|
|
|
multirangeOid, typoid, rangeArrayOid,
|
2021-06-15 14:59:20 +02:00
|
|
|
&singleArgContructorOid);
|

	/* Create casts for this multirange type */
	makeMultirangeCasts(multirangeTypeName, typeNamespace,
						multirangeOid, typoid, rangeArrayOid,
						singleArgContructorOid);

	pfree(multirangeTypeName);
	pfree(multirangeArrayName);

	return address;
}

/*
 * Because there may exist several range types over the same subtype, the
 * range type can't be uniquely determined from the subtype.  So it's
 * impossible to define a polymorphic constructor; we have to generate new
 * constructor functions explicitly for each range type.
 *
 * We actually define two functions, with two and three arguments.  This is
 * just to offer more convenience for the user.
 */
static void
makeRangeConstructors(const char *name, Oid namespace,
					  Oid rangeOid, Oid subtype)
{
	static const char *const prosrc[2] = {"range_constructor2",
	"range_constructor3"};
	static const int pronargs[2] = {2, 3};

	Oid			constructorArgTypes[3];
	ObjectAddress myself,
				referenced;
	int			i;

	constructorArgTypes[0] = subtype;
	constructorArgTypes[1] = subtype;
	constructorArgTypes[2] = TEXTOID;

	referenced.classId = TypeRelationId;
	referenced.objectId = rangeOid;
	referenced.objectSubId = 0;

	for (i = 0; i < lengthof(prosrc); i++)
	{
		oidvector  *constructorArgTypesVector;

		constructorArgTypesVector = buildoidvector(constructorArgTypes,
												   pronargs[i]);

		myself = ProcedureCreate(name,	/* name: same as range type */
								 namespace, /* namespace */
								 false, /* replace */
								 false, /* returns set */
								 rangeOid,	/* return type */
								 BOOTSTRAP_SUPERUSERID, /* proowner */
								 INTERNALlanguageId,	/* language */
								 F_FMGR_INTERNAL_VALIDATOR, /* language validator */
								 prosrc[i], /* prosrc */
								 NULL,	/* probin */
								 NULL,	/* prosqlbody */
								 PROKIND_FUNCTION,
								 false, /* security_definer */
								 false, /* leakproof */
								 false, /* isStrict */
								 PROVOLATILE_IMMUTABLE, /* volatility */
								 PROPARALLEL_SAFE,	/* parallel safety */
								 constructorArgTypesVector, /* parameterTypes */
								 PointerGetDatum(NULL), /* allParameterTypes */
								 PointerGetDatum(NULL), /* parameterModes */
								 PointerGetDatum(NULL), /* parameterNames */
								 NIL,	/* parameterDefaults */
								 PointerGetDatum(NULL), /* trftypes */
								 PointerGetDatum(NULL), /* proconfig */
								 InvalidOid,	/* prosupport */
								 1.0,	/* procost */
								 0.0);	/* prorows */

		/*
		 * Make the constructors internally-dependent on the range type so
		 * that they go away silently when the type is dropped.  Note that
		 * pg_dump depends on this choice to avoid dumping the constructors.
		 */
		recordDependencyOn(&myself, &referenced, DEPENDENCY_INTERNAL);
	}
}

/*
 * We make a separate multirange constructor for each range type so its name
 * can include the base type, like range constructors do.  If we had an
 * anyrangearray polymorphic type we could use it here, but since each type
 * has its own constructor name there's no need.
 *
 * Sets oneArgContructorOid to the OID of the new constructor that can be
 * used to cast from a range to a multirange.
 */
static void
makeMultirangeConstructors(const char *name, Oid namespace,
						   Oid multirangeOid, Oid rangeOid, Oid rangeArrayOid,
						   Oid *oneArgContructorOid)
{
	ObjectAddress myself,
				referenced;
	oidvector  *argtypes;
	Datum		allParamTypes;
	ArrayType  *allParameterTypes;
	Datum		paramModes;
	ArrayType  *parameterModes;

	referenced.classId = TypeRelationId;
	referenced.objectId = multirangeOid;
	referenced.objectSubId = 0;

	/* 0-arg constructor - for empty multiranges */
	argtypes = buildoidvector(NULL, 0);
	myself = ProcedureCreate(name,	/* name: same as multirange type */
							 namespace,
							 false, /* replace */
							 false, /* returns set */
							 multirangeOid, /* return type */
							 BOOTSTRAP_SUPERUSERID, /* proowner */
							 INTERNALlanguageId,	/* language */
							 F_FMGR_INTERNAL_VALIDATOR,
							 "multirange_constructor0", /* prosrc */
							 NULL,	/* probin */
							 NULL,	/* prosqlbody */
							 PROKIND_FUNCTION,
							 false, /* security_definer */
							 false, /* leakproof */
							 true,	/* isStrict */
PROVOLATILE_IMMUTABLE, /* volatility */
|
|
|
|
PROPARALLEL_SAFE, /* parallel safety */
|
|
|
|
argtypes, /* parameterTypes */
|
|
|
|
PointerGetDatum(NULL), /* allParameterTypes */
|
|
|
|
PointerGetDatum(NULL), /* parameterModes */
|
|
|
|
PointerGetDatum(NULL), /* parameterNames */
|
|
|
|
NIL, /* parameterDefaults */
|
|
|
|
PointerGetDatum(NULL), /* trftypes */
|
|
|
|
PointerGetDatum(NULL), /* proconfig */
|
|
|
|
InvalidOid, /* prosupport */
|
|
|
|
1.0, /* procost */
|
|
|
|
0.0); /* prorows */
|
|
|
|
|
|
|
|
/*
|
|
|
|
 * Make the constructors internally-dependent on the multirange type so
|
|
|
|
* that they go away silently when the type is dropped. Note that pg_dump
|
|
|
|
* depends on this choice to avoid dumping the constructors.
|
|
|
|
*/
|
|
|
|
recordDependencyOn(&myself, &referenced, DEPENDENCY_INTERNAL);
|
|
|
|
pfree(argtypes);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* 1-arg constructor - for casts
|
|
|
|
*
|
|
|
|
* In theory we shouldn't need both this and the vararg (n-arg)
|
|
|
|
* constructor, but having a separate 1-arg function lets us define casts
|
|
|
|
* against it.
|
|
|
|
*/
|
|
|
|
argtypes = buildoidvector(&rangeOid, 1);
|
|
|
|
myself = ProcedureCreate(name, /* name: same as multirange type */
|
|
|
|
namespace,
|
|
|
|
false, /* replace */
|
|
|
|
false, /* returns set */
|
|
|
|
multirangeOid, /* return type */
|
|
|
|
BOOTSTRAP_SUPERUSERID, /* proowner */
|
|
|
|
INTERNALlanguageId, /* language */
|
|
|
|
F_FMGR_INTERNAL_VALIDATOR,
|
|
|
|
"multirange_constructor1", /* prosrc */
|
|
|
|
NULL, /* probin */
|
2021-04-07 21:30:08 +02:00
|
|
|
NULL, /* prosqlbody */
|
2020-12-20 05:20:33 +01:00
|
|
|
PROKIND_FUNCTION,
|
|
|
|
false, /* security_definer */
|
|
|
|
false, /* leakproof */
|
|
|
|
true, /* isStrict */
|
|
|
|
PROVOLATILE_IMMUTABLE, /* volatility */
|
|
|
|
PROPARALLEL_SAFE, /* parallel safety */
|
|
|
|
argtypes, /* parameterTypes */
|
|
|
|
PointerGetDatum(NULL), /* allParameterTypes */
|
|
|
|
PointerGetDatum(NULL), /* parameterModes */
|
|
|
|
PointerGetDatum(NULL), /* parameterNames */
|
|
|
|
NIL, /* parameterDefaults */
|
|
|
|
PointerGetDatum(NULL), /* trftypes */
|
|
|
|
PointerGetDatum(NULL), /* proconfig */
|
|
|
|
InvalidOid, /* prosupport */
|
|
|
|
1.0, /* procost */
|
|
|
|
0.0); /* prorows */
|
|
|
|
/* ditto */
|
|
|
|
recordDependencyOn(&myself, &referenced, DEPENDENCY_INTERNAL);
|
|
|
|
pfree(argtypes);
|
2021-06-15 14:59:20 +02:00
|
|
|
*oneArgContructorOid = myself.objectId;
|
2020-12-20 05:20:33 +01:00
|
|
|
|
|
|
|
/* n-arg constructor - vararg */
|
|
|
|
argtypes = buildoidvector(&rangeArrayOid, 1);
|
|
|
|
allParamTypes = ObjectIdGetDatum(rangeArrayOid);
|
|
|
|
allParameterTypes = construct_array(&allParamTypes,
|
|
|
|
1, OIDOID,
|
|
|
|
sizeof(Oid), true, 'i');
|
|
|
|
paramModes = CharGetDatum(FUNC_PARAM_VARIADIC);
|
|
|
|
parameterModes = construct_array(¶mModes, 1, CHAROID,
|
|
|
|
1, true, 'c');
|
|
|
|
myself = ProcedureCreate(name, /* name: same as multirange type */
|
|
|
|
namespace,
|
|
|
|
false, /* replace */
|
|
|
|
false, /* returns set */
|
|
|
|
multirangeOid, /* return type */
|
|
|
|
BOOTSTRAP_SUPERUSERID, /* proowner */
|
|
|
|
INTERNALlanguageId, /* language */
|
|
|
|
F_FMGR_INTERNAL_VALIDATOR,
|
|
|
|
"multirange_constructor2", /* prosrc */
|
|
|
|
NULL, /* probin */
|
2021-04-07 21:30:08 +02:00
|
|
|
NULL, /* prosqlbody */
|
2020-12-20 05:20:33 +01:00
|
|
|
PROKIND_FUNCTION,
|
|
|
|
false, /* security_definer */
|
|
|
|
false, /* leakproof */
|
2021-05-12 19:14:10 +02:00
|
|
|
true, /* isStrict */
|
2020-12-20 05:20:33 +01:00
|
|
|
PROVOLATILE_IMMUTABLE, /* volatility */
|
|
|
|
PROPARALLEL_SAFE, /* parallel safety */
|
|
|
|
argtypes, /* parameterTypes */
|
|
|
|
PointerGetDatum(allParameterTypes), /* allParameterTypes */
|
|
|
|
PointerGetDatum(parameterModes), /* parameterModes */
|
|
|
|
PointerGetDatum(NULL), /* parameterNames */
|
|
|
|
NIL, /* parameterDefaults */
|
|
|
|
PointerGetDatum(NULL), /* trftypes */
|
|
|
|
PointerGetDatum(NULL), /* proconfig */
|
|
|
|
InvalidOid, /* prosupport */
|
|
|
|
1.0, /* procost */
|
|
|
|
0.0); /* prorows */
|
|
|
|
/* ditto */
|
|
|
|
recordDependencyOn(&myself, &referenced, DEPENDENCY_INTERNAL);
|
|
|
|
pfree(argtypes);
|
|
|
|
pfree(allParameterTypes);
|
|
|
|
pfree(parameterModes);
|
|
|
|
}
|
2002-04-15 07:22:04 +02:00
|
|
|
|
2021-06-15 14:59:20 +02:00
|
|
|
/*
|
|
|
|
* Create casts for the multirange type. The first cast makes multirange from
|
|
|
|
* range, and it's based on the single-argument constructor. The second cast
|
|
|
|
* makes an array of ranges from multirange.
|
|
|
|
*/
|
|
|
|
static void
|
|
|
|
makeMultirangeCasts(const char *name, Oid namespace,
|
|
|
|
Oid multirangeOid, Oid rangeOid, Oid rangeArrayOid,
|
|
|
|
Oid singleArgContructorOid)
|
|
|
|
{
|
|
|
|
ObjectAddress myself,
|
|
|
|
referenced;
|
|
|
|
oidvector *argtypes;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Create cast from range to multirange using the existing single-argument
|
|
|
|
* constructor procedure.
|
|
|
|
*/
|
|
|
|
CastCreate(rangeOid, multirangeOid, singleArgContructorOid, 'e', 'f',
|
|
|
|
DEPENDENCY_INTERNAL);
|
|
|
|
|
|
|
|
referenced.classId = TypeRelationId;
|
|
|
|
referenced.objectId = multirangeOid;
|
|
|
|
referenced.objectSubId = 0;
|
|
|
|
|
|
|
|
/* multirange_to_array() function */
|
|
|
|
argtypes = buildoidvector(&multirangeOid, 1);
|
|
|
|
myself = ProcedureCreate("multirange_to_array", /* name */
|
|
|
|
namespace,
|
|
|
|
false, /* replace */
|
|
|
|
false, /* returns set */
|
|
|
|
rangeArrayOid, /* return type */
|
|
|
|
BOOTSTRAP_SUPERUSERID, /* proowner */
|
|
|
|
INTERNALlanguageId, /* language */
|
|
|
|
F_FMGR_INTERNAL_VALIDATOR,
|
|
|
|
"multirange_to_array", /* prosrc */
|
|
|
|
NULL, /* probin */
|
|
|
|
NULL, /* prosqlbody */
|
|
|
|
PROKIND_FUNCTION,
|
|
|
|
false, /* security_definer */
|
|
|
|
false, /* leakproof */
|
|
|
|
true, /* isStrict */
|
|
|
|
PROVOLATILE_IMMUTABLE, /* volatility */
|
|
|
|
PROPARALLEL_SAFE, /* parallel safety */
|
|
|
|
argtypes, /* parameterTypes */
|
|
|
|
PointerGetDatum(NULL), /* allParameterTypes */
|
|
|
|
PointerGetDatum(NULL), /* parameterModes */
|
|
|
|
PointerGetDatum(NULL), /* parameterNames */
|
|
|
|
NIL, /* parameterDefaults */
|
|
|
|
PointerGetDatum(NULL), /* trftypes */
|
|
|
|
PointerGetDatum(NULL), /* proconfig */
|
|
|
|
InvalidOid, /* prosupport */
|
|
|
|
1.0, /* procost */
|
|
|
|
0.0); /* prorows */
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Make the multirange_to_array() function internally-dependent on the
|
|
|
|
* multirange type so that they go away silently when the type is dropped.
|
|
|
|
*/
|
|
|
|
recordDependencyOn(&myself, &referenced, DEPENDENCY_INTERNAL);
|
|
|
|
pfree(argtypes);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Create cast from multirange to the array of ranges using
|
|
|
|
* multirange_to_array() function.
|
|
|
|
*/
|
|
|
|
CastCreate(multirangeOid, rangeArrayOid, myself.objectId, 'e', 'f',
|
|
|
|
DEPENDENCY_INTERNAL);
|
|
|
|
}
|
|
|
|
|
2002-04-15 07:22:04 +02:00
|
|
|
/*
|
Support subscripting of arbitrary types, not only arrays.
This patch generalizes the subscripting infrastructure so that any
data type can be subscripted, if it provides a handler function to
define what that means. Traditional variable-length (varlena) arrays
all use array_subscript_handler(), while the existing fixed-length
types that support subscripting use raw_array_subscript_handler().
It's expected that other types that want to use subscripting notation
will define their own handlers. (This patch provides no such new
features, though; it only lays the foundation for them.)
To do this, move the parser's semantic processing of subscripts
(including coercion to whatever data type is required) into a
method callback supplied by the handler. On the execution side,
replace the ExecEvalSubscriptingRef* layer of functions with direct
calls to callback-supplied execution routines. (Thus, essentially
no new run-time overhead should be caused by this patch. Indeed,
there is room to remove some overhead by supplying specialized
execution routines. This patch does a little bit in that line,
but more could be done.)
Additional work is required here and there to remove formerly
hard-wired assumptions about the result type, collation, etc
of a SubscriptingRef expression node; and to remove assumptions
that the subscript values must be integers.
One useful side-effect of this is that we now have a less squishy
mechanism for identifying whether a data type is a "true" array:
instead of wiring in weird rules about typlen, we can look to see
if pg_type.typsubscript == F_ARRAY_SUBSCRIPT_HANDLER. For this
to be bulletproof, we have to forbid user-defined types from using
that handler directly; but there seems no good reason for them to
do so.
This patch also removes assumptions that the number of subscripts
is limited to MAXDIM (6), or indeed has any hard-wired limit.
That limit still applies to types handled by array_subscript_handler
or raw_array_subscript_handler, but to discourage other dependencies
on this constant, I've moved it from c.h to utils/array.h.
Dmitry Dolgov, reviewed at various times by Tom Lane, Arthur Zakirov,
Peter Eisentraut, Pavel Stehule
Discussion: https://postgr.es/m/CA+q6zcVDuGBv=M0FqBYX8DPebS3F_0KQ6OVFobGJPM507_SZ_w@mail.gmail.com
Discussion: https://postgr.es/m/CA+q6zcVovR+XY4mfk-7oNk-rF91gH0PebnNfuUjuuDsyHjOcVA@mail.gmail.com
2020-12-09 18:40:37 +01:00
|
|
|
* Find suitable I/O and other support functions for a type.
|
2002-08-22 02:01:51 +02:00
|
|
|
*
|
2002-09-21 20:39:26 +02:00
|
|
|
* typeOid is the type's OID (which will already exist, if only as a shell
|
|
|
|
* type).
|
2002-04-15 07:22:04 +02:00
|
|
|
*/
|
2003-05-09 00:19:58 +02:00
|
|
|
|
2002-04-15 07:22:04 +02:00
|
|
|
static Oid
|
2003-05-09 00:19:58 +02:00
|
|
|
findTypeInputFunction(List *procname, Oid typeOid)
|
2002-04-15 07:22:04 +02:00
|
|
|
{
|
2005-03-29 05:01:32 +02:00
|
|
|
Oid argList[3];
|
2002-04-15 07:22:04 +02:00
|
|
|
Oid procOid;
|
Make contrib modules' installation scripts more secure.
Hostile objects located within the installation-time search_path could
capture references in an extension's installation or upgrade script.
If the extension is being installed with superuser privileges, this
opens the door to privilege escalation. While such hazards have existed
all along, their urgency increases with the v13 "trusted extensions"
feature, because that lets a non-superuser control the installation path
for a superuser-privileged script. Therefore, make a number of changes
to make such situations more secure:
* Tweak the construction of the installation-time search_path to ensure
that references to objects in pg_catalog can't be subverted; and
explicitly add pg_temp to the end of the path to prevent attacks using
temporary objects.
* Disable check_function_bodies within installation/upgrade scripts,
so that any security gaps in SQL-language or PL-language function bodies
cannot create a risk of unwanted installation-time code execution.
* Adjust lookup of type input/receive functions and join estimator
functions to complain if there are multiple candidate functions. This
prevents capture of references to functions whose signature is not the
first one checked; and it's arguably more user-friendly anyway.
* Modify various contrib upgrade scripts to ensure that catalog
modification queries are executed with secure search paths. (These
are in-place modifications with no extension version changes, since
it is the update process itself that is at issue, not the end result.)
Extensions that depend on other extensions cannot be made fully secure
by these methods alone; therefore, revert the "trusted" marking that
commit eb67623c9 applied to earthdistance and hstore_plperl, pending
some better solution to that set of issues.
Also add documentation around these issues, to help extension authors
write secure installation scripts.
Patch by me, following an observation by Andres Freund; thanks
to Noah Misch for review.
Security: CVE-2020-14350
2020-08-10 16:44:42 +02:00
|
|
|
Oid procOid2;
|
2002-04-15 07:22:04 +02:00
|
|
|
|
2003-05-09 00:19:58 +02:00
|
|
|
/*
|
2005-10-15 04:49:52 +02:00
|
|
|
* Input functions can take a single argument of type CSTRING, or three
|
2020-08-10 16:44:42 +02:00
|
|
|
* arguments (string, typioparam OID, typmod). Whine about ambiguity if
|
|
|
|
* both forms exist.
|
2003-05-09 00:19:58 +02:00
|
|
|
*/
|
|
|
|
argList[0] = CSTRINGOID;
|
2020-08-10 16:44:42 +02:00
|
|
|
argList[1] = OIDOID;
|
|
|
|
argList[2] = INT4OID;
|
2003-05-09 00:19:58 +02:00
|
|
|
|
2003-07-04 04:51:34 +02:00
|
|
|
procOid = LookupFuncName(procname, 1, argList, true);
|
2020-08-10 16:44:42 +02:00
|
|
|
procOid2 = LookupFuncName(procname, 3, argList, true);
|
|
|
|
if (OidIsValid(procOid))
|
2002-04-15 07:22:04 +02:00
|
|
|
{
|
2020-08-10 16:44:42 +02:00
|
|
|
if (OidIsValid(procOid2))
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_AMBIGUOUS_FUNCTION),
|
|
|
|
errmsg("type input function %s has multiple matches",
|
|
|
|
NameListToString(procname))));
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
procOid = procOid2;
|
|
|
|
/* If not found, reference the 1-argument signature in error msg */
|
2020-03-05 21:48:56 +01:00
|
|
|
if (!OidIsValid(procOid))
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_UNDEFINED_FUNCTION),
|
|
|
|
errmsg("function %s does not exist",
|
|
|
|
func_signature_string(procname, 1, NIL, argList))));
|
2003-05-09 00:19:58 +02:00
|
|
|
}
|
|
|
|
|
2020-08-10 16:44:42 +02:00
|
|
|
/* Input functions must return the target type. */
|
2020-03-05 21:48:56 +01:00
|
|
|
if (get_func_rettype(procOid) != typeOid)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
|
|
|
errmsg("type input function %s must return type %s",
|
|
|
|
NameListToString(procname), format_type_be(typeOid))));
|
2002-08-22 02:01:51 +02:00
|
|
|
|
2020-03-06 18:19:29 +01:00
|
|
|
/*
|
|
|
|
* Print warnings if any of the type's I/O functions are marked volatile.
|
|
|
|
* There is a general assumption that I/O functions are stable or
|
|
|
|
* immutable; this allows us for example to mark record_in/record_out
|
|
|
|
* stable rather than volatile. Ideally we would throw errors not just
|
|
|
|
* warnings here; but since this check is new as of 9.5, and since the
|
|
|
|
* volatility marking might be just an error-of-omission and not a true
|
|
|
|
* indication of how the function behaves, we'll let it pass as a warning
|
|
|
|
* for now.
|
|
|
|
*/
|
|
|
|
if (func_volatile(procOid) == PROVOLATILE_VOLATILE)
|
|
|
|
ereport(WARNING,
|
|
|
|
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
|
|
|
|
errmsg("type input function %s should not be volatile",
|
|
|
|
NameListToString(procname))));
|
|
|
|
|
2020-03-05 21:48:56 +01:00
|
|
|
return procOid;
|
2003-05-09 00:19:58 +02:00
|
|
|
}
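The signature-resolution policy findTypeInputFunction implements (both the 1-argument and 3-argument forms present is ambiguous, neither present is an error, otherwise use whichever exists) can be isolated as a standalone sketch. `Oid`, the status enum, and `resolve_input_func` below are simplified stand-ins, not the backend's types.

```c
/*
 * Standalone sketch of the input-function lookup policy above: finding both
 * the 1-argument and 3-argument signatures is ambiguous, finding neither is
 * an error, otherwise take the one that exists.  Simplified stand-in types.
 */
#include <assert.h>

typedef unsigned int Oid;

#define InvalidOid		((Oid) 0)
#define OidIsValid(oid)	((oid) != InvalidOid)

typedef enum
{
	LOOKUP_OK,
	LOOKUP_AMBIGUOUS,			/* both signatures exist */
	LOOKUP_UNDEFINED			/* neither signature exists */
} LookupStatus;

static LookupStatus
resolve_input_func(Oid oneArgOid, Oid threeArgOid, Oid *result)
{
	if (OidIsValid(oneArgOid))
	{
		if (OidIsValid(threeArgOid))
			return LOOKUP_AMBIGUOUS;
		*result = oneArgOid;
		return LOOKUP_OK;
	}
	if (!OidIsValid(threeArgOid))
		return LOOKUP_UNDEFINED;
	*result = threeArgOid;
	return LOOKUP_OK;
}
```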
|
2002-04-15 07:22:04 +02:00
|
|
|
|
2003-05-09 00:19:58 +02:00
|
|
|
static Oid
findTypeOutputFunction(List *procname, Oid typeOid)
{
	Oid			argList[1];
	Oid			procOid;

	/*
	 * Output functions always take a single argument of the type and return
	 * cstring.
	 */
	argList[0] = typeOid;

	procOid = LookupFuncName(procname, 1, argList, true);
	if (!OidIsValid(procOid))
		ereport(ERROR,
				(errcode(ERRCODE_UNDEFINED_FUNCTION),
				 errmsg("function %s does not exist",
						func_signature_string(procname, 1, NIL, argList))));

	if (get_func_rettype(procOid) != CSTRINGOID)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
				 errmsg("type output function %s must return type %s",
						NameListToString(procname), "cstring")));

	/* Just a warning for now, per comments in findTypeInputFunction */
	if (func_volatile(procOid) == PROVOLATILE_VOLATILE)
		ereport(WARNING,
				(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
				 errmsg("type output function %s should not be volatile",
						NameListToString(procname))));

	return procOid;
}

static Oid
findTypeReceiveFunction(List *procname, Oid typeOid)
{
	Oid			argList[3];
	Oid			procOid;
	Oid			procOid2;

	/*
	 * Receive functions can take a single argument of type INTERNAL, or three
	 * arguments (internal, typioparam OID, typmod).  Whine about ambiguity if
	 * both forms exist.
	 */
	argList[0] = INTERNALOID;
	argList[1] = OIDOID;
	argList[2] = INT4OID;

	procOid = LookupFuncName(procname, 1, argList, true);
	procOid2 = LookupFuncName(procname, 3, argList, true);
	if (OidIsValid(procOid))
	{
		if (OidIsValid(procOid2))
			ereport(ERROR,
					(errcode(ERRCODE_AMBIGUOUS_FUNCTION),
					 errmsg("type receive function %s has multiple matches",
							NameListToString(procname))));
	}
	else
	{
		procOid = procOid2;
		/* If not found, reference the 1-argument signature in error msg */
		if (!OidIsValid(procOid))
			ereport(ERROR,
					(errcode(ERRCODE_UNDEFINED_FUNCTION),
					 errmsg("function %s does not exist",
							func_signature_string(procname, 1, NIL, argList))));
	}

	/* Receive functions must return the target type. */
	if (get_func_rettype(procOid) != typeOid)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
				 errmsg("type receive function %s must return type %s",
						NameListToString(procname), format_type_be(typeOid))));

	/* Just a warning for now, per comments in findTypeInputFunction */
	if (func_volatile(procOid) == PROVOLATILE_VOLATILE)
		ereport(WARNING,
				(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
				 errmsg("type receive function %s should not be volatile",
						NameListToString(procname))));

	return procOid;
}
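/*
 * Illustrative sketch, not part of this file: the preferred 3-argument
 * receive-function signature checked above corresponds to a SQL declaration
 * along these lines, using the hypothetical type name "complex":
 *
 *     CREATE FUNCTION complex_recv(internal, oid, integer)
 *         RETURNS complex
 *         AS 'filename' LANGUAGE C IMMUTABLE STRICT;
 *
 * The oid argument carries the typioparam and the integer argument the
 * destination typmod, matching argList[1] and argList[2] above.
 */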

static Oid
findTypeSendFunction(List *procname, Oid typeOid)
{
	Oid			argList[1];
	Oid			procOid;

	/*
	 * Send functions always take a single argument of the type and return
	 * bytea.
	 */
	argList[0] = typeOid;

	procOid = LookupFuncName(procname, 1, argList, true);
	if (!OidIsValid(procOid))
		ereport(ERROR,
				(errcode(ERRCODE_UNDEFINED_FUNCTION),
				 errmsg("function %s does not exist",
						func_signature_string(procname, 1, NIL, argList))));

	if (get_func_rettype(procOid) != BYTEAOID)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
				 errmsg("type send function %s must return type %s",
						NameListToString(procname), "bytea")));

	/* Just a warning for now, per comments in findTypeInputFunction */
	if (func_volatile(procOid) == PROVOLATILE_VOLATILE)
		ereport(WARNING,
				(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
				 errmsg("type send function %s should not be volatile",
						NameListToString(procname))));

	return procOid;
}

static Oid
findTypeTypmodinFunction(List *procname)
{
	Oid			argList[1];
	Oid			procOid;

	/*
	 * typmodin functions always take one cstring[] argument and return int4.
	 */
	argList[0] = CSTRINGARRAYOID;

	procOid = LookupFuncName(procname, 1, argList, true);
	if (!OidIsValid(procOid))
		ereport(ERROR,
				(errcode(ERRCODE_UNDEFINED_FUNCTION),
				 errmsg("function %s does not exist",
						func_signature_string(procname, 1, NIL, argList))));

	if (get_func_rettype(procOid) != INT4OID)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
				 errmsg("typmod_in function %s must return type %s",
						NameListToString(procname), "integer")));

	/* Just a warning for now, per comments in findTypeInputFunction */
	if (func_volatile(procOid) == PROVOLATILE_VOLATILE)
		ereport(WARNING,
				(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
				 errmsg("type modifier input function %s should not be volatile",
						NameListToString(procname))));

	return procOid;
}

static Oid
findTypeTypmodoutFunction(List *procname)
{
	Oid			argList[1];
	Oid			procOid;

	/*
	 * typmodout functions always take one int4 argument and return cstring.
	 */
	argList[0] = INT4OID;

	procOid = LookupFuncName(procname, 1, argList, true);
	if (!OidIsValid(procOid))
		ereport(ERROR,
				(errcode(ERRCODE_UNDEFINED_FUNCTION),
				 errmsg("function %s does not exist",
						func_signature_string(procname, 1, NIL, argList))));

	if (get_func_rettype(procOid) != CSTRINGOID)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
				 errmsg("typmod_out function %s must return type %s",
						NameListToString(procname), "cstring")));

	/* Just a warning for now, per comments in findTypeInputFunction */
	if (func_volatile(procOid) == PROVOLATILE_VOLATILE)
		ereport(WARNING,
				(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
				 errmsg("type modifier output function %s should not be volatile",
						NameListToString(procname))));

	return procOid;
}

static Oid
findTypeAnalyzeFunction(List *procname, Oid typeOid)
{
	Oid			argList[1];
	Oid			procOid;

	/*
	 * Analyze functions always take one INTERNAL argument and return bool.
	 */
	argList[0] = INTERNALOID;

	procOid = LookupFuncName(procname, 1, argList, true);
	if (!OidIsValid(procOid))
		ereport(ERROR,
				(errcode(ERRCODE_UNDEFINED_FUNCTION),
				 errmsg("function %s does not exist",
						func_signature_string(procname, 1, NIL, argList))));

	if (get_func_rettype(procOid) != BOOLOID)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
				 errmsg("type analyze function %s must return type %s",
						NameListToString(procname), "boolean")));

	return procOid;
}
Support subscripting of arbitrary types, not only arrays.
This patch generalizes the subscripting infrastructure so that any
data type can be subscripted, if it provides a handler function to
define what that means. Traditional variable-length (varlena) arrays
all use array_subscript_handler(), while the existing fixed-length
types that support subscripting use raw_array_subscript_handler().
It's expected that other types that want to use subscripting notation
will define their own handlers. (This patch provides no such new
features, though; it only lays the foundation for them.)
To do this, move the parser's semantic processing of subscripts
(including coercion to whatever data type is required) into a
method callback supplied by the handler. On the execution side,
replace the ExecEvalSubscriptingRef* layer of functions with direct
calls to callback-supplied execution routines. (Thus, essentially
no new run-time overhead should be caused by this patch. Indeed,
there is room to remove some overhead by supplying specialized
execution routines. This patch does a little bit in that line,
but more could be done.)
Additional work is required here and there to remove formerly
hard-wired assumptions about the result type, collation, etc
of a SubscriptingRef expression node; and to remove assumptions
that the subscript values must be integers.
One useful side-effect of this is that we now have a less squishy
mechanism for identifying whether a data type is a "true" array:
instead of wiring in weird rules about typlen, we can look to see
if pg_type.typsubscript == F_ARRAY_SUBSCRIPT_HANDLER. For this
to be bulletproof, we have to forbid user-defined types from using
that handler directly; but there seems no good reason for them to
do so.
This patch also removes assumptions that the number of subscripts
is limited to MAXDIM (6), or indeed has any hard-wired limit.
That limit still applies to types handled by array_subscript_handler
or raw_array_subscript_handler, but to discourage other dependencies
on this constant, I've moved it from c.h to utils/array.h.
Dmitry Dolgov, reviewed at various times by Tom Lane, Arthur Zakirov,
Peter Eisentraut, Pavel Stehule
Discussion: https://postgr.es/m/CA+q6zcVDuGBv=M0FqBYX8DPebS3F_0KQ6OVFobGJPM507_SZ_w@mail.gmail.com
Discussion: https://postgr.es/m/CA+q6zcVovR+XY4mfk-7oNk-rF91gH0PebnNfuUjuuDsyHjOcVA@mail.gmail.com
2020-12-09 18:40:37 +01:00
static Oid
findTypeSubscriptingFunction(List *procname, Oid typeOid)
{
	Oid			argList[1];
	Oid			procOid;

	/*
	 * Subscripting support functions always take one INTERNAL argument and
	 * return INTERNAL.  (The argument is not used, but we must have it to
	 * maintain type safety.)
	 */
	argList[0] = INTERNALOID;

	procOid = LookupFuncName(procname, 1, argList, true);
	if (!OidIsValid(procOid))
		ereport(ERROR,
				(errcode(ERRCODE_UNDEFINED_FUNCTION),
				 errmsg("function %s does not exist",
						func_signature_string(procname, 1, NIL, argList))));

	if (get_func_rettype(procOid) != INTERNALOID)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
				 errmsg("type subscripting function %s must return type %s",
						NameListToString(procname), "internal")));

	/*
	 * We disallow array_subscript_handler() from being selected explicitly,
	 * since that must only be applied to autogenerated array types.
	 */
	if (procOid == F_ARRAY_SUBSCRIPT_HANDLER)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
				 errmsg("user-defined types cannot use subscripting function %s",
						NameListToString(procname))));

	return procOid;
}
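/*
 * Illustrative sketch, not part of this file: a custom subscripting handler
 * found by this lookup would be declared and attached roughly as follows,
 * with hypothetical names:
 *
 *     CREATE FUNCTION my_subscript_handler(internal)
 *         RETURNS internal
 *         AS 'filename' LANGUAGE C IMMUTABLE STRICT;
 *
 *     CREATE TYPE mytype (
 *         input = mytype_in,
 *         output = mytype_out,
 *         subscript = my_subscript_handler
 *     );
 */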

/*
 * Find suitable support functions and opclasses for a range type.
 */

/*
 * Find named btree opclass for subtype, or default btree opclass if
 * opcname is NIL.
 */
static Oid
findRangeSubOpclass(List *opcname, Oid subtype)
{
	Oid			opcid;
	Oid			opInputType;

	if (opcname != NIL)
	{
		opcid = get_opclass_oid(BTREE_AM_OID, opcname, false);

		/*
		 * Verify that the operator class accepts this datatype.  Note we
		 * will accept binary compatibility.
		 */
		opInputType = get_opclass_input_type(opcid);
		if (!IsBinaryCoercible(subtype, opInputType))
			ereport(ERROR,
					(errcode(ERRCODE_DATATYPE_MISMATCH),
					 errmsg("operator class \"%s\" does not accept data type %s",
							NameListToString(opcname),
							format_type_be(subtype))));
	}
	else
	{
		opcid = GetDefaultOpClass(subtype, BTREE_AM_OID);
		if (!OidIsValid(opcid))
		{
			/* We spell the error message identically to ResolveOpClass */
			ereport(ERROR,
					(errcode(ERRCODE_UNDEFINED_OBJECT),
					 errmsg("data type %s has no default operator class for access method \"%s\"",
							format_type_be(subtype), "btree"),
					 errhint("You must specify an operator class for the range type or define a default operator class for the subtype.")));
		}
	}

	return opcid;
}
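/*
 * Illustrative sketch, not part of this file: the opclass looked up here
 * comes from the optional subtype_opclass clause of CREATE TYPE ... AS
 * RANGE, e.g. (hypothetical range type name):
 *
 *     CREATE TYPE textrange AS RANGE (
 *         subtype = text,
 *         subtype_opclass = text_pattern_ops
 *     );
 *
 * When no subtype_opclass is given, the subtype's default btree opclass is
 * used, as in the else-branch above.
 */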

static Oid
findRangeCanonicalFunction(List *procname, Oid typeOid)
{
	Oid			argList[1];
	Oid			procOid;
	AclResult	aclresult;

	/*
	 * Range canonical functions must take and return the range type, and
	 * must be immutable.
	 */
	argList[0] = typeOid;

	procOid = LookupFuncName(procname, 1, argList, true);

	if (!OidIsValid(procOid))
		ereport(ERROR,
				(errcode(ERRCODE_UNDEFINED_FUNCTION),
				 errmsg("function %s does not exist",
						func_signature_string(procname, 1, NIL, argList))));

	if (get_func_rettype(procOid) != typeOid)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
				 errmsg("range canonical function %s must return range type",
						func_signature_string(procname, 1, NIL, argList))));

	if (func_volatile(procOid) != PROVOLATILE_IMMUTABLE)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
				 errmsg("range canonical function %s must be immutable",
						func_signature_string(procname, 1, NIL, argList))));

	/* Also, range type's creator must have permission to call function */
	aclresult = pg_proc_aclcheck(procOid, GetUserId(), ACL_EXECUTE);
	if (aclresult != ACLCHECK_OK)
		aclcheck_error(aclresult, OBJECT_FUNCTION, get_func_name(procOid));

	return procOid;
}

static Oid
findRangeSubtypeDiffFunction(List *procname, Oid subtype)
{
	Oid			argList[2];
	Oid			procOid;
	AclResult	aclresult;

	/*
	 * Range subtype diff functions must take two arguments of the subtype,
	 * must return float8, and must be immutable.
	 */
	argList[0] = subtype;
	argList[1] = subtype;

	procOid = LookupFuncName(procname, 2, argList, true);

	if (!OidIsValid(procOid))
		ereport(ERROR,
				(errcode(ERRCODE_UNDEFINED_FUNCTION),
				 errmsg("function %s does not exist",
						func_signature_string(procname, 2, NIL, argList))));

	if (get_func_rettype(procOid) != FLOAT8OID)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
				 errmsg("range subtype diff function %s must return type %s",
						func_signature_string(procname, 2, NIL, argList),
						"double precision")));

	if (func_volatile(procOid) != PROVOLATILE_IMMUTABLE)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
				 errmsg("range subtype diff function %s must be immutable",
						func_signature_string(procname, 2, NIL, argList))));

	/* Also, range type's creator must have permission to call function */
	aclresult = pg_proc_aclcheck(procOid, GetUserId(), ACL_EXECUTE);
	if (aclresult != ACLCHECK_OK)
		aclcheck_error(aclresult, OBJECT_FUNCTION, get_func_name(procOid));

	return procOid;
}

/*
 * AssignTypeArrayOid
 *
 * Pre-assign the type's array OID for use in pg_type.typarray
 */
Oid
AssignTypeArrayOid(void)
{
	Oid			type_array_oid;

	/* Use binary-upgrade override for pg_type.typarray? */
	if (IsBinaryUpgrade)
	{
		if (!OidIsValid(binary_upgrade_next_array_pg_type_oid))
			ereport(ERROR,
					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
					 errmsg("pg_type array OID value not set when in binary upgrade mode")));

		type_array_oid = binary_upgrade_next_array_pg_type_oid;
		binary_upgrade_next_array_pg_type_oid = InvalidOid;
	}
	else
	{
		Relation	pg_type = table_open(TypeRelationId, AccessShareLock);

		type_array_oid = GetNewOidWithIndex(pg_type, TypeOidIndexId,
											Anum_pg_type_oid);
		table_close(pg_type, AccessShareLock);
	}

	return type_array_oid;
}

/*
 * AssignTypeMultirangeOid
 *
 * Pre-assign the range type's multirange OID for use in pg_type.oid
 */
Oid
AssignTypeMultirangeOid(void)
{
	Oid			type_multirange_oid;

	/* Use binary-upgrade override for pg_type.oid? */
	if (IsBinaryUpgrade)
	{
		if (!OidIsValid(binary_upgrade_next_mrng_pg_type_oid))
			ereport(ERROR,
					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
					 errmsg("pg_type multirange OID value not set when in binary upgrade mode")));

		type_multirange_oid = binary_upgrade_next_mrng_pg_type_oid;
		binary_upgrade_next_mrng_pg_type_oid = InvalidOid;
	}
	else
	{
		Relation	pg_type = table_open(TypeRelationId, AccessShareLock);

		type_multirange_oid = GetNewOidWithIndex(pg_type, TypeOidIndexId,
												 Anum_pg_type_oid);
		table_close(pg_type, AccessShareLock);
	}

	return type_multirange_oid;
}

/*
 * AssignTypeMultirangeArrayOid
 *
 * Pre-assign the range type's multirange array OID for use in pg_type.typarray
 */
Oid
AssignTypeMultirangeArrayOid(void)
{
	Oid			type_multirange_array_oid;

	/* Use binary-upgrade override for pg_type.oid? */
	if (IsBinaryUpgrade)
	{
		if (!OidIsValid(binary_upgrade_next_mrng_array_pg_type_oid))
			ereport(ERROR,
					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
					 errmsg("pg_type multirange array OID value not set when in binary upgrade mode")));

		type_multirange_array_oid = binary_upgrade_next_mrng_array_pg_type_oid;
		binary_upgrade_next_mrng_array_pg_type_oid = InvalidOid;
	}
	else
	{
		Relation	pg_type = table_open(TypeRelationId, AccessShareLock);

		type_multirange_array_oid = GetNewOidWithIndex(pg_type, TypeOidIndexId,
													   Anum_pg_type_oid);
		table_close(pg_type, AccessShareLock);
	}

	return type_multirange_array_oid;
}

/*-------------------------------------------------------------------
 * DefineCompositeType
 *
 * Create a Composite Type relation.
 * `DefineRelation' does all the work, we just provide the correct
 * arguments!
 *
 * If the relation already exists, then 'DefineRelation' will abort
 * the xact...
 *
 * Return type is the new type's object address.
 *-------------------------------------------------------------------
 */
ObjectAddress
DefineCompositeType(RangeVar *typevar, List *coldeflist)
{
	CreateStmt *createStmt = makeNode(CreateStmt);
	Oid			old_type_oid;
	Oid			typeNamespace;
	ObjectAddress address;

	/*
	 * now set the parameters for keys/inheritance etc. All of these are
	 * uninteresting for composite types...
	 */
	createStmt->relation = typevar;
	createStmt->tableElts = coldeflist;
	createStmt->inhRelations = NIL;
	createStmt->constraints = NIL;
	createStmt->options = NIL;
	createStmt->oncommit = ONCOMMIT_NOOP;
	createStmt->tablespacename = NULL;
	createStmt->if_not_exists = false;

	/*
	 * Check for collision with an existing type name. If there is one and
	 * it's an autogenerated array, we can rename it out of the way. This
	 * check is here mainly to get a better error message about a "type"
	 * instead of below about a "relation".
	 */
	typeNamespace = RangeVarGetAndCheckCreationNamespace(createStmt->relation,
														 NoLock, NULL);
	RangeVarAdjustRelationPersistence(createStmt->relation, typeNamespace);
	old_type_oid =
		GetSysCacheOid2(TYPENAMENSP, Anum_pg_type_oid,
						CStringGetDatum(createStmt->relation->relname),
						ObjectIdGetDatum(typeNamespace));
	if (OidIsValid(old_type_oid))
	{
		if (!moveArrayTypeName(old_type_oid, createStmt->relation->relname, typeNamespace))
			ereport(ERROR,
					(errcode(ERRCODE_DUPLICATE_OBJECT),
					 errmsg("type \"%s\" already exists", createStmt->relation->relname)));
	}

	/*
	 * Finally create the relation. This also creates the type.
	 */
	DefineRelation(createStmt, RELKIND_COMPOSITE_TYPE, InvalidOid, &address,
				   NULL);

	return address;
}
|
2002-12-06 06:00:34 +01:00
|
|
|
|
|
|
|
/*
|
|
|
|
* AlterDomainDefault
|
|
|
|
*
|
2003-08-04 02:43:34 +02:00
|
|
|
* Routine implementing ALTER DOMAIN SET/DROP DEFAULT statements.
|
Change many routines to return ObjectAddress rather than OID
The changed routines are mostly those that can be directly called by
ProcessUtilitySlow; the intention is to make the affected object
information more precise, in support for future event trigger changes.
Originally it was envisioned that the OID of the affected object would
be enough, and in most cases that is correct, but upon actually
implementing the event trigger changes it turned out that ObjectAddress
is more widely useful.
Additionally, some command execution routines grew an output argument
that's an object address which provides further info about the executed
command. To wit:
* for ALTER DOMAIN / ADD CONSTRAINT, it corresponds to the address of
the new constraint
* for ALTER OBJECT / SET SCHEMA, it corresponds to the address of the
schema that originally contained the object.
* for ALTER EXTENSION {ADD, DROP} OBJECT, it corresponds to the address
of the object added to or dropped from the extension.
There's no user-visible change in this commit, and no functional change
either.
Discussion: 20150218213255.GC6717@tamriel.snowman.net
Reviewed-By: Stephen Frost, Andres Freund
2015-03-03 18:10:50 +01:00
|
|
|
*
|
|
|
|
* Returns ObjectAddress of the modified domain.
|
2002-12-06 06:00:34 +01:00
|
|
|
*/
|
Change many routines to return ObjectAddress rather than OID
The changed routines are mostly those that can be directly called by
ProcessUtilitySlow; the intention is to make the affected object
information more precise, in support for future event trigger changes.
Originally it was envisioned that the OID of the affected object would
be enough, and in most cases that is correct, but upon actually
implementing the event trigger changes it turned out that ObjectAddress
is more widely useful.
Additionally, some command execution routines grew an output argument
that's an object address which provides further info about the executed
command. To wit:
* for ALTER DOMAIN / ADD CONSTRAINT, it corresponds to the address of
the new constraint
* for ALTER OBJECT / SET SCHEMA, it corresponds to the address of the
schema that originally contained the object.
* for ALTER EXTENSION {ADD, DROP} OBJECT, it corresponds to the address
of the object added to or dropped from the extension.
There's no user-visible change in this commit, and no functional change
either.
Discussion: 20150218213255.GC6717@tamriel.snowman.net
Reviewed-By: Stephen Frost, Andres Freund
2015-03-03 18:10:50 +01:00
|
|
|
ObjectAddress
|
2002-12-06 06:00:34 +01:00
|
|
|
AlterDomainDefault(List *names, Node *defaultRaw)
|
|
|
|
{
|
|
|
|
TypeName *typename;
|
|
|
|
Oid domainoid;
|
|
|
|
HeapTuple tup;
|
|
|
|
ParseState *pstate;
|
|
|
|
Relation rel;
|
|
|
|
char *defaultValue;
|
Phase 2 of pgindent updates.
Change pg_bsd_indent to follow upstream rules for placement of comments
to the right of code, and remove pgindent hack that caused comments
following #endif to not obey the general rule.
Commit e3860ffa4dd0dad0dd9eea4be9cc1412373a8c89 wasn't actually using
the published version of pg_bsd_indent, but a hacked-up version that
tried to minimize the amount of movement of comments to the right of
code. The situation of interest is where such a comment has to be
moved to the right of its default placement at column 33 because there's
code there. BSD indent has always moved right in units of tab stops
in such cases --- but in the previous incarnation, indent was working
in 8-space tab stops, while now it knows we use 4-space tabs. So the
net result is that in about half the cases, such comments are placed
one tab stop left of before. This is better all around: it leaves
more room on the line for comment text, and it means that in such
cases the comment uniformly starts at the next 4-space tab stop after
the code, rather than sometimes one and sometimes two tabs after.
Also, ensure that comments following #endif are indented the same
as comments following other preprocessor commands such as #else.
That inconsistency turns out to have been self-inflicted damage
from a poorly-thought-through post-indent "fixup" in pgindent.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 21:18:54 +02:00
|
|
|
	Node	   *defaultExpr = NULL; /* NULL if no default specified */
	Datum		new_record[Natts_pg_type];
	bool		new_record_nulls[Natts_pg_type];
	bool		new_record_repl[Natts_pg_type];
	HeapTuple	newtuple;
	Form_pg_type typTup;
	ObjectAddress address;

	/* Make a TypeName so we can use standard type lookup machinery */
	typename = makeTypeNameFromNameList(names);
	domainoid = typenameTypeId(NULL, typename);

	/* Look up the domain in the type table */
	rel = table_open(TypeRelationId, RowExclusiveLock);

	tup = SearchSysCacheCopy1(TYPEOID, ObjectIdGetDatum(domainoid));
	if (!HeapTupleIsValid(tup))
		elog(ERROR, "cache lookup failed for type %u", domainoid);
	typTup = (Form_pg_type) GETSTRUCT(tup);

	/* Check it's a domain and check user has permission for ALTER DOMAIN */
	checkDomainOwner(tup);

	/* Setup new tuple */
	MemSet(new_record, (Datum) 0, sizeof(new_record));
	MemSet(new_record_nulls, false, sizeof(new_record_nulls));
	MemSet(new_record_repl, false, sizeof(new_record_repl));

	/* Store the new default into the tuple */
	if (defaultRaw)
	{
		/* Create a dummy ParseState for transformExpr */
		pstate = make_parsestate(NULL);

		/*
		 * Cook the colDef->raw_expr into an expression. Note: Name is
		 * strictly for error message
		 */
		defaultExpr = cookDefault(pstate, defaultRaw,
								  typTup->typbasetype,
								  typTup->typtypmod,
								  NameStr(typTup->typname),
								  0);

		/*
		 * If the expression is just a NULL constant, we treat the command
		 * like ALTER ... DROP DEFAULT.  (But see note for same test in
		 * DefineDomain.)
		 */
		if (defaultExpr == NULL ||
			(IsA(defaultExpr, Const) && ((Const *) defaultExpr)->constisnull))
		{
			/* Default is NULL, drop it */
			defaultExpr = NULL;
			new_record_nulls[Anum_pg_type_typdefaultbin - 1] = true;
			new_record_repl[Anum_pg_type_typdefaultbin - 1] = true;
			new_record_nulls[Anum_pg_type_typdefault - 1] = true;
			new_record_repl[Anum_pg_type_typdefault - 1] = true;
		}
		else
		{
			/*
			 * Expression must be stored as a nodeToString result, but we also
			 * require a valid textual representation (mainly to make life
			 * easier for pg_dump).
			 */
			defaultValue = deparse_expression(defaultExpr,
											  NIL, false, false);

			/*
			 * Form an updated tuple with the new default and write it back.
			 */
			new_record[Anum_pg_type_typdefaultbin - 1] = CStringGetTextDatum(nodeToString(defaultExpr));
			new_record_repl[Anum_pg_type_typdefaultbin - 1] = true;
			new_record[Anum_pg_type_typdefault - 1] = CStringGetTextDatum(defaultValue);
			new_record_repl[Anum_pg_type_typdefault - 1] = true;
		}
	}
	else
	{
		/* ALTER ... DROP DEFAULT */
		new_record_nulls[Anum_pg_type_typdefaultbin - 1] = true;
		new_record_repl[Anum_pg_type_typdefaultbin - 1] = true;
		new_record_nulls[Anum_pg_type_typdefault - 1] = true;
		new_record_repl[Anum_pg_type_typdefault - 1] = true;
	}

	newtuple = heap_modify_tuple(tup, RelationGetDescr(rel),
								 new_record, new_record_nulls,
								 new_record_repl);

	CatalogTupleUpdate(rel, &tup->t_self, newtuple);

	/* Rebuild dependencies */
	GenerateTypeDependencies(newtuple,
							 rel,
							 defaultExpr,
							 NULL,	/* don't have typacl handy */
							 0, /* relation kind is n/a */
							 false, /* a domain isn't an implicit array */
							 false, /* nor is it any kind of dependent type */
							 true); /* We do need to rebuild dependencies */

	InvokeObjectPostAlterHook(TypeRelationId, domainoid, 0);

	ObjectAddressSet(address, TypeRelationId, domainoid);

	/* Clean up */
	table_close(rel, RowExclusiveLock);
	heap_freetuple(newtuple);

	return address;
}

/*
 * AlterDomainNotNull
 *
 * Routine implementing ALTER DOMAIN SET/DROP NOT NULL statements.
 *
 * Returns ObjectAddress of the modified domain.
 */
|
Change many routines to return ObjectAddress rather than OID
The changed routines are mostly those that can be directly called by
ProcessUtilitySlow; the intention is to make the affected object
information more precise, in support for future event trigger changes.
Originally it was envisioned that the OID of the affected object would
be enough, and in most cases that is correct, but upon actually
implementing the event trigger changes it turned out that ObjectAddress
is more widely useful.
Additionally, some command execution routines grew an output argument
that's an object address which provides further info about the executed
command. To wit:
* for ALTER DOMAIN / ADD CONSTRAINT, it corresponds to the address of
the new constraint
* for ALTER OBJECT / SET SCHEMA, it corresponds to the address of the
schema that originally contained the object.
* for ALTER EXTENSION {ADD, DROP} OBJECT, it corresponds to the address
of the object added to or dropped from the extension.
There's no user-visible change in this commit, and no functional change
either.
Discussion: 20150218213255.GC6717@tamriel.snowman.net
Reviewed-By: Stephen Frost, Andres Freund
2015-03-03 18:10:50 +01:00
|
|
|
ObjectAddress
|
2002-12-06 06:00:34 +01:00
|
|
|
AlterDomainNotNull(List *names, bool notNull)
|
|
|
|
{
|
|
|
|
TypeName *typename;
|
|
|
|
Oid domainoid;
|
2003-01-04 01:46:08 +01:00
|
|
|
Relation typrel;
|
2002-12-06 06:00:34 +01:00
|
|
|
HeapTuple tup;
|
2003-08-04 02:43:34 +02:00
|
|
|
Form_pg_type typTup;
|
Change many routines to return ObjectAddress rather than OID
The changed routines are mostly those that can be directly called by
ProcessUtilitySlow; the intention is to make the affected object
information more precise, in support for future event trigger changes.
Originally it was envisioned that the OID of the affected object would
be enough, and in most cases that is correct, but upon actually
implementing the event trigger changes it turned out that ObjectAddress
is more widely useful.
Additionally, some command execution routines grew an output argument
that's an object address which provides further info about the executed
command. To wit:
* for ALTER DOMAIN / ADD CONSTRAINT, it corresponds to the address of
the new constraint
* for ALTER OBJECT / SET SCHEMA, it corresponds to the address of the
schema that originally contained the object.
* for ALTER EXTENSION {ADD, DROP} OBJECT, it corresponds to the address
of the object added to or dropped from the extension.
There's no user-visible change in this commit, and no functional change
either.
Discussion: 20150218213255.GC6717@tamriel.snowman.net
Reviewed-By: Stephen Frost, Andres Freund
2015-03-03 18:10:50 +01:00
|
|
|
ObjectAddress address = InvalidObjectAddress;
|
2002-12-06 06:00:34 +01:00
|
|
|
|
|
|
|
/* Make a TypeName so we can use standard type lookup machinery */
|
2006-03-14 23:48:25 +01:00
|
|
|
typename = makeTypeNameFromNameList(names);
|
2010-10-25 20:40:46 +02:00
|
|
|
domainoid = typenameTypeId(NULL, typename);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2006-03-14 23:48:25 +01:00
|
|
|
/* Look up the domain in the type table */
|
2019-01-21 19:32:19 +01:00
|
|
|
typrel = table_open(TypeRelationId, RowExclusiveLock);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2010-02-14 19:42:19 +01:00
|
|
|
tup = SearchSysCacheCopy1(TYPEOID, ObjectIdGetDatum(domainoid));
|
2002-12-06 06:00:34 +01:00
|
|
|
if (!HeapTupleIsValid(tup))
|
2003-07-20 23:56:35 +02:00
|
|
|
elog(ERROR, "cache lookup failed for type %u", domainoid);
|
2003-01-04 01:46:08 +01:00
|
|
|
typTup = (Form_pg_type) GETSTRUCT(tup);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2006-03-14 23:48:25 +01:00
|
|
|
/* Check it's a domain and check user has permission for ALTER DOMAIN */
|
2010-10-25 05:04:37 +02:00
|
|
|
checkDomainOwner(tup);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2003-01-04 01:46:08 +01:00
|
|
|
/* Is the domain already set to the desired constraint? */
|
2002-12-06 06:00:34 +01:00
|
|
|
if (typTup->typnotnull == notNull)
|
2003-01-04 01:46:08 +01:00
|
|
|
{
|
2019-01-21 19:32:19 +01:00
|
|
|
table_close(typrel, RowExclusiveLock);
|
Change many routines to return ObjectAddress rather than OID
The changed routines are mostly those that can be directly called by
ProcessUtilitySlow; the intention is to make the affected object
information more precise, in support for future event trigger changes.
Originally it was envisioned that the OID of the affected object would
be enough, and in most cases that is correct, but upon actually
implementing the event trigger changes it turned out that ObjectAddress
is more widely useful.
Additionally, some command execution routines grew an output argument
that's an object address which provides further info about the executed
command. To wit:
* for ALTER DOMAIN / ADD CONSTRAINT, it corresponds to the address of
the new constraint
* for ALTER OBJECT / SET SCHEMA, it corresponds to the address of the
schema that originally contained the object.
* for ALTER EXTENSION {ADD, DROP} OBJECT, it corresponds to the address
of the object added to or dropped from the extension.
There's no user-visible change in this commit, and no functional change
either.
Discussion: 20150218213255.GC6717@tamriel.snowman.net
Reviewed-By: Stephen Frost, Andres Freund
2015-03-03 18:10:50 +01:00
|
|
|
return address;
|
2003-01-04 01:46:08 +01:00
|
|
|
}
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2003-01-04 01:46:08 +01:00
|
|
|
/* Adding a NOT NULL constraint requires checking existing columns */
|
2002-12-06 06:00:34 +01:00
|
|
|
if (notNull)
|
|
|
|
{
|
2003-08-04 02:43:34 +02:00
|
|
|
List *rels;
|
2004-05-26 06:41:50 +02:00
|
|
|
ListCell *rt;
|
2002-12-06 06:00:34 +01:00
|
|
|
|
|
|
|
/* Fetch relation list with attributes based on this domain */
|
2003-01-04 01:46:08 +01:00
|
|
|
/* ShareLock is sufficient to prevent concurrent data changes */
|
|
|
|
|
|
|
|
rels = get_rels_with_domain(domainoid, ShareLock);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2003-08-04 02:43:34 +02:00
|
|
|
foreach(rt, rels)
|
2002-12-06 06:00:34 +01:00
|
|
|
{
|
2003-01-04 01:46:08 +01:00
|
|
|
RelToCheck *rtc = (RelToCheck *) lfirst(rt);
|
|
|
|
Relation testrel = rtc->rel;
|
|
|
|
TupleDesc tupdesc = RelationGetDescr(testrel);
|
tableam: Add and use scan APIs.
Too allow table accesses to be not directly dependent on heap, several
new abstractions are needed. Specifically:
1) Heap scans need to be generalized into table scans. Do this by
introducing TableScanDesc, which will be the "base class" for
individual AMs. This contains the AM independent fields from
HeapScanDesc.
The previous heap_{beginscan,rescan,endscan} et al. have been
replaced with a table_ version.
There's no direct replacement for heap_getnext(), as that returned
a HeapTuple, which is undesirable for a other AMs. Instead there's
table_scan_getnextslot(). But note that heap_getnext() lives on,
it's still used widely to access catalog tables.
This is achieved by new scan_begin, scan_end, scan_rescan,
scan_getnextslot callbacks.
2) The portion of parallel scans that's shared between backends need
to be able to do so without the user doing per-AM work. To achieve
that new parallelscan_{estimate, initialize, reinitialize}
callbacks are introduced, which operate on a new
ParallelTableScanDesc, which again can be subclassed by AMs.
As it is likely that several AMs are going to be block oriented,
block oriented callbacks that can be shared between such AMs are
provided and used by heap. table_block_parallelscan_{estimate,
intiialize, reinitialize} as callbacks, and
table_block_parallelscan_{nextpage, init} for use in AMs. These
operate on a ParallelBlockTableScanDesc.
3) Index scans need to be able to access tables to return a tuple, and
there needs to be state across individual accesses to the heap to
store state like buffers. That's now handled by introducing a
sort-of-scan IndexFetchTable, which again is intended to be
subclassed by individual AMs (for heap IndexFetchHeap).
The relevant callbacks for an AM are index_fetch_{end, begin,
reset} to create the necessary state, and index_fetch_tuple to
retrieve an indexed tuple. Note that index_fetch_tuple
implementations need to be smarter than just blindly fetching the
tuples for AMs that have optimizations similar to heap's HOT - the
currently alive tuple in the update chain needs to be fetched if
appropriate.
Similar to table_scan_getnextslot(), it's undesirable to continue
to return HeapTuples. Thus index_fetch_heap (might want to rename
that later) now accepts a slot as an argument. Core code doesn't
have a lot of call sites performing index scans without going
through the systable_* API (in contrast to loads of heap_getnext
calls and working directly with HeapTuples).
Index scans now store the result of a search in
IndexScanDesc->xs_heaptid, rather than xs_ctup->t_self. As the
target is not generally a HeapTuple anymore that seems cleaner.
To be able to sensible adapt code to use the above, two further
callbacks have been introduced:
a) slot_callbacks returns a TupleTableSlotOps* suitable for creating
slots capable of holding a tuple of the AMs
type. table_slot_callbacks() and table_slot_create() are based
upon that, but have additional logic to deal with views, foreign
tables, etc.
While this change could have been done separately, nearly all the
call sites that needed to be adapted for the rest of this commit
also would have been needed to be adapted for
table_slot_callbacks(), making separation not worthwhile.
b) tuple_satisfies_snapshot checks whether the tuple in a slot is
currently visible according to a snapshot. That's required as a few
places now don't have a buffer + HeapTuple around, but a
slot (which in heap's case internally has that information).
Additionally a few infrastructure changes were needed:
I) SysScanDesc, as used by systable_{beginscan, getnext} et al. now
internally uses a slot to keep track of tuples. While
systable_getnext() still returns HeapTuples, and will so for the
foreseeable future, the index API (see 1) above) now only deals with
slots.
The remainder, and largest part, of this commit is then adjusting all
scans in postgres to use the new APIs.
Author: Andres Freund, Haribabu Kommi, Alvaro Herrera
Discussion:
https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
2019-03-11 20:46:41 +01:00
|
|
|
TupleTableSlot *slot;
|
|
|
|
TableScanDesc scan;
|
Use an MVCC snapshot, rather than SnapshotNow, for catalog scans.
SnapshotNow scans have the undesirable property that, in the face of
concurrent updates, the scan can fail to see either the old or the new
versions of the row. In many cases, we work around this by requiring
DDL operations to hold AccessExclusiveLock on the object being
modified; in some cases, the existing locking is inadequate and random
failures occur as a result. This commit doesn't change anything
related to locking, but will hopefully pave the way to allowing lock
strength reductions in the future.
The major issue has held us back from making this change in the past
is that taking an MVCC snapshot is significantly more expensive than
using a static special snapshot such as SnapshotNow. However, testing
of various worst-case scenarios reveals that this problem is not
severe except under fairly extreme workloads. To mitigate those
problems, we avoid retaking the MVCC snapshot for each new scan;
instead, we take a new snapshot only when invalidation messages have
been processed. The catcache machinery already requires that
invalidation messages be sent before releasing the related heavyweight
lock; else other backends might rely on locally-cached data rather
than scanning the catalog at all. Thus, making snapshot reuse
dependent on the same guarantees shouldn't break anything that wasn't
already subtly broken.
Patch by me. Review by Michael Paquier and Andres Freund.
2013-07-02 15:47:01 +02:00
|
|
|
Snapshot snapshot;
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2003-01-04 01:46:08 +01:00
|
|
|
/* Scan all tuples in this relation */
|
Use an MVCC snapshot, rather than SnapshotNow, for catalog scans.
SnapshotNow scans have the undesirable property that, in the face of
concurrent updates, the scan can fail to see either the old or the new
versions of the row. In many cases, we work around this by requiring
DDL operations to hold AccessExclusiveLock on the object being
modified; in some cases, the existing locking is inadequate and random
failures occur as a result. This commit doesn't change anything
related to locking, but will hopefully pave the way to allowing lock
strength reductions in the future.
The major issue has held us back from making this change in the past
is that taking an MVCC snapshot is significantly more expensive than
using a static special snapshot such as SnapshotNow. However, testing
of various worst-case scenarios reveals that this problem is not
severe except under fairly extreme workloads. To mitigate those
problems, we avoid retaking the MVCC snapshot for each new scan;
instead, we take a new snapshot only when invalidation messages have
been processed. The catcache machinery already requires that
invalidation messages be sent before releasing the related heavyweight
lock; else other backends might rely on locally-cached data rather
than scanning the catalog at all. Thus, making snapshot reuse
dependent on the same guarantees shouldn't break anything that wasn't
already subtly broken.
Patch by me. Review by Michael Paquier and Andres Freund.
2013-07-02 15:47:01 +02:00
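The snapshot-reuse policy that commit describes (take a new catalog snapshot only after invalidation messages have been processed) can be sketched in deliberately simplified, standalone C. This is an illustration of the policy only, not PostgreSQL's actual snapshot machinery; all names here (`inval_counter`, `get_catalog_snapshot`, etc.) are invented for the sketch.

```c
#include <assert.h>

/*
 * Hypothetical sketch (NOT PostgreSQL's real code): keep reusing one
 * "snapshot" and take a fresh one only after invalidation messages
 * have been processed, since taking a real MVCC snapshot is costly.
 */
static int inval_counter = 0;   /* bumped when invalidation messages arrive */
static int snap_taken_at = -1;  /* inval_counter value at snapshot time */
static int snapshot_id = 0;     /* stands in for an expensive MVCC snapshot */

/* Called after a backend processes invalidation messages. */
void process_invalidations(void)
{
    inval_counter++;
}

/* Return a snapshot for a catalog scan, refreshing it only if
 * invalidations were processed since the cached one was taken. */
int get_catalog_snapshot(void)
{
    if (snap_taken_at != inval_counter)
    {
        snapshot_id++;              /* the costly step we want to avoid */
        snap_taken_at = inval_counter;
    }
    return snapshot_id;            /* otherwise reuse the cached snapshot */
}
```

Because invalidation messages are sent before the related heavyweight lock is released, keying reuse on the invalidation stream gives the same visibility guarantees the catcache already depends on.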
|
|
|
snapshot = RegisterSnapshot(GetLatestSnapshot());
|
tableam: Add and use scan APIs.
To allow table accesses to not be directly dependent on heap, several
new abstractions are needed. Specifically:
1) Heap scans need to be generalized into table scans. Do this by
introducing TableScanDesc, which will be the "base class" for
individual AMs. This contains the AM independent fields from
HeapScanDesc.
The previous heap_{beginscan,rescan,endscan} et al. have been
replaced with a table_ version.
There's no direct replacement for heap_getnext(), as that returned
a HeapTuple, which is undesirable for other AMs. Instead there's
table_scan_getnextslot(). But note that heap_getnext() lives on;
it's still used widely to access catalog tables.
This is achieved by new scan_begin, scan_end, scan_rescan,
scan_getnextslot callbacks.
2) The portion of parallel scans that's shared between backends needs
to work without the user doing per-AM work. To achieve
that new parallelscan_{estimate, initialize, reinitialize}
callbacks are introduced, which operate on a new
ParallelTableScanDesc, which again can be subclassed by AMs.
As it is likely that several AMs are going to be block oriented,
block oriented callbacks that can be shared between such AMs are
provided and used by heap. table_block_parallelscan_{estimate,
initialize, reinitialize} as callbacks, and
table_block_parallelscan_{nextpage, init} for use in AMs. These
operate on a ParallelBlockTableScanDesc.
3) Index scans need to be able to access tables to return a tuple, and
there needs to be state across individual accesses to the heap to
store state like buffers. That's now handled by introducing a
sort-of-scan IndexFetchTable, which again is intended to be
subclassed by individual AMs (for heap IndexFetchHeap).
The relevant callbacks for an AM are index_fetch_{end, begin,
reset} to create the necessary state, and index_fetch_tuple to
retrieve an indexed tuple. Note that index_fetch_tuple
implementations need to be smarter than just blindly fetching the
tuples for AMs that have optimizations similar to heap's HOT - the
currently alive tuple in the update chain needs to be fetched if
appropriate.
Similar to table_scan_getnextslot(), it's undesirable to continue
to return HeapTuples. Thus index_fetch_heap (might want to rename
that later) now accepts a slot as an argument. Core code doesn't
have a lot of call sites performing index scans without going
through the systable_* API (in contrast to loads of heap_getnext
calls and working directly with HeapTuples).
Index scans now store the result of a search in
IndexScanDesc->xs_heaptid, rather than xs_ctup->t_self. As the
target is not generally a HeapTuple anymore that seems cleaner.
To be able to sensibly adapt code to use the above, two further
callbacks have been introduced:
a) slot_callbacks returns a TupleTableSlotOps* suitable for creating
slots capable of holding a tuple of the AM's
type. table_slot_callbacks() and table_slot_create() are based
upon that, but have additional logic to deal with views, foreign
tables, etc.
While this change could have been done separately, nearly all the
call sites that needed to be adapted for the rest of this commit
also would have been needed to be adapted for
table_slot_callbacks(), making separation not worthwhile.
b) tuple_satisfies_snapshot checks whether the tuple in a slot is
currently visible according to a snapshot. That's required as a few
places now don't have a buffer + HeapTuple around, but a
slot (which in heap's case internally has that information).
Additionally a few infrastructure changes were needed:
I) SysScanDesc, as used by systable_{beginscan, getnext} et al. now
internally uses a slot to keep track of tuples. While
systable_getnext() still returns HeapTuples, and will so for the
foreseeable future, the index API (see 1) above) now only deals with
slots.
The remainder, and largest part, of this commit is then adjusting all
scans in postgres to use the new APIs.
Author: Andres Freund, Haribabu Kommi, Alvaro Herrera
Discussion:
https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
2019-03-11 20:46:41 +01:00
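The "base class" layout this commit introduces (AM-independent fields in `TableScanDescData`, with an AM like heap embedding it as the first member so pointers can be cast in both directions) can be shown with a minimal standalone sketch. The field names below are simplified stand-ins, not the real PostgreSQL definitions.

```c
#include <assert.h>

/*
 * Illustrative sketch of struct "subclassing" as used by the tableam scan
 * API: the base struct carries AM-independent state; an AM-specific struct
 * embeds it as its FIRST member so a pointer to either can be converted to
 * the other. Simplified fields, not the actual PostgreSQL structs.
 */
typedef struct TableScanDescData
{
    int rs_nkeys;               /* AM-independent: number of scan keys */
} TableScanDescData;
typedef TableScanDescData *TableScanDesc;

typedef struct HeapScanDescData
{
    TableScanDescData rs_base;  /* must be the first member */
    int rs_cblock;              /* heap-specific: current block number */
} HeapScanDescData;

/* AM-independent code sees only the base part. */
int scan_nkeys(TableScanDesc scan)
{
    return scan->rs_nkeys;
}
```

The same embedding pattern underlies `ParallelBlockTableScanDesc` and `IndexFetchHeap` in the commit's description.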
|
|
|
scan = table_beginscan(testrel, snapshot, 0, NULL);
|
|
|
|
slot = table_slot_create(testrel, NULL);
|
|
|
|
while (table_scan_getnextslot(scan, ForwardScanDirection, slot))
|
2002-12-06 06:00:34 +01:00
|
|
|
{
|
2003-08-04 02:43:34 +02:00
|
|
|
int i;
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2003-01-04 01:46:08 +01:00
|
|
|
/* Test attributes that are of the domain */
|
2002-12-06 06:00:34 +01:00
|
|
|
for (i = 0; i < rtc->natts; i++)
|
|
|
|
{
|
2003-08-04 02:43:34 +02:00
|
|
|
int attnum = rtc->atts[i];
|
2017-08-20 20:19:07 +02:00
|
|
|
Form_pg_attribute attr = TupleDescAttr(tupdesc, attnum - 1);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2019-03-11 20:46:41 +01:00
|
|
|
if (slot_attisnull(slot, attnum))
|
Provide database object names as separate fields in error messages.
This patch addresses the problem that applications currently have to
extract object names from possibly-localized textual error messages,
if they want to know for example which index caused a UNIQUE_VIOLATION
failure. It adds new error message fields to the wire protocol, which
can carry the name of a table, table column, data type, or constraint
associated with the error. (Since the protocol spec has always instructed
clients to ignore unrecognized field types, this should not create any
compatibility problem.)
Support for providing these new fields has been added to just a limited set
of error reports (mainly, those in the "integrity constraint violation"
SQLSTATE class), but we will doubtless add them to more calls in future.
Pavel Stehule, reviewed and extensively revised by Peter Geoghegan, with
additional hacking by Tom Lane.
2013-01-29 23:06:26 +01:00
|
|
|
{
|
|
|
|
/*
|
|
|
|
* In principle the auxiliary information for this
|
|
|
|
* error should be errdatatype(), but errtablecol()
|
2014-05-06 18:12:18 +02:00
|
|
|
* seems considerably more useful in practice. Since
|
2013-01-29 23:06:26 +01:00
|
|
|
* this code only executes in an ALTER DOMAIN command,
|
|
|
|
* the client should already know which domain is in
|
|
|
|
* question.
|
|
|
|
*/
|
2003-07-20 23:56:35 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_NOT_NULL_VIOLATION),
|
2003-09-25 08:58:07 +02:00
|
|
|
errmsg("column \"%s\" of table \"%s\" contains null values",
|
2017-08-20 20:19:07 +02:00
|
|
|
NameStr(attr->attname),
|
2013-01-29 23:06:26 +01:00
|
|
|
RelationGetRelationName(testrel)),
|
|
|
|
errtablecol(testrel, attnum)));
|
|
|
|
}
|
2002-12-06 06:00:34 +01:00
|
|
|
}
|
|
|
|
}
|
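The "Provide database object names as separate fields" commit quoted above is about exactly the `ereport` call in this loop: the table and column names travel as separate, non-localized error fields (via `errtablecol()`), so a client inspects those instead of parsing the localized message text. A standalone sketch of that idea, with invented names (`ErrorReport`, `violating_column`) that are not PostgreSQL's API:

```c
#include <assert.h>
#include <string.h>

/*
 * Illustrative sketch only: the server attaches machine-readable auxiliary
 * fields alongside the localized message, so clients need not parse the
 * human-readable text to learn which table/column was involved.
 */
typedef struct ErrorReport
{
    const char *message;      /* localized, for humans */
    const char *table_name;   /* auxiliary field, stable across locales */
    const char *column_name;  /* auxiliary field, stable across locales */
} ErrorReport;

const char *violating_column(const ErrorReport *err)
{
    return err->column_name;  /* no fragile string parsing needed */
}
```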
2019-03-11 20:46:41 +01:00
|
|
|
ExecDropSingleTupleTableSlot(slot);
|
|
|
|
table_endscan(scan);
|
2013-07-02 15:47:01 +02:00
|
|
|
UnregisterSnapshot(snapshot);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2003-01-04 01:46:08 +01:00
|
|
|
/* Close each rel after processing, but keep lock */
|
2019-01-21 19:32:19 +01:00
|
|
|
table_close(testrel, NoLock);
|
2002-12-06 06:00:34 +01:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2003-01-04 01:46:08 +01:00
|
|
|
/*
|
2014-05-06 18:12:18 +02:00
|
|
|
* Okay to update pg_type row. We can scribble on typTup because it's a
|
2005-10-15 04:49:52 +02:00
|
|
|
* copy.
|
2003-01-04 01:46:08 +01:00
|
|
|
*/
|
|
|
|
typTup->typnotnull = notNull;
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2017-01-31 22:42:24 +01:00
|
|
|
CatalogTupleUpdate(typrel, &tup->t_self, tup);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2013-03-18 03:55:14 +01:00
|
|
|
InvokeObjectPostAlterHook(TypeRelationId, domainoid, 0);
|
|
|
|
|
Change many routines to return ObjectAddress rather than OID
The changed routines are mostly those that can be directly called by
ProcessUtilitySlow; the intention is to make the affected object
information more precise, in support of future event trigger changes.
Originally it was envisioned that the OID of the affected object would
be enough, and in most cases that is correct, but upon actually
implementing the event trigger changes it turned out that ObjectAddress
is more widely useful.
Additionally, some command execution routines grew an output argument
that's an object address which provides further info about the executed
command. To wit:
* for ALTER DOMAIN / ADD CONSTRAINT, it corresponds to the address of
the new constraint
* for ALTER OBJECT / SET SCHEMA, it corresponds to the address of the
schema that originally contained the object.
* for ALTER EXTENSION {ADD, DROP} OBJECT, it corresponds to the address
of the object added to or dropped from the extension.
There's no user-visible change in this commit, and no functional change
either.
Discussion: 20150218213255.GC6717@tamriel.snowman.net
Reviewed-By: Stephen Frost, Andres Freund
2015-03-03 18:10:50 +01:00
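The `ObjectAddress` these routines now return is a (catalog, object, sub-object) triple; `ObjectAddressSet` below mirrors the shape of the real macro in `catalog/objectaddress.h`, but the definitions here are a simplified sketch, not the actual headers.

```c
#include <assert.h>

/*
 * Simplified stand-in for PostgreSQL's ObjectAddress: identifies any
 * database object by its catalog, its OID, and an optional sub-id
 * (e.g. a column number). Sketch only, not the real definition.
 */
typedef struct ObjectAddress
{
    unsigned int classId;     /* OID of the system catalog */
    unsigned int objectId;    /* OID of the object itself */
    int          objectSubId; /* column number, or 0 if none */
} ObjectAddress;

#define ObjectAddressSet(addr, class_id, object_id) \
    do { \
        (addr).classId = (class_id); \
        (addr).objectId = (object_id); \
        (addr).objectSubId = 0; \
    } while (0)
```

This is why the commit's event-trigger work preferred addresses over bare OIDs: an OID alone does not say which catalog it belongs to.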
|
|
|
ObjectAddressSet(address, TypeRelationId, domainoid);
|
|
|
|
|
2002-12-06 06:00:34 +01:00
|
|
|
/* Clean up */
|
2003-01-04 01:46:08 +01:00
|
|
|
heap_freetuple(tup);
|
2019-01-21 19:32:19 +01:00
|
|
|
table_close(typrel, RowExclusiveLock);
|
2012-12-29 13:55:37 +01:00
|
|
|
|
2015-03-03 18:10:50 +01:00
|
|
|
return address;
|
2002-12-06 06:00:34 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* AlterDomainDropConstraint
|
|
|
|
*
|
|
|
|
* Implements the ALTER DOMAIN DROP CONSTRAINT statement
|
Fully enforce uniqueness of constraint names.
It's been true for a long time that we expect names of table and domain
constraints to be unique among the constraints of that table or domain.
However, the enforcement of that has been pretty haphazard, and it missed
some corner cases such as creating a CHECK constraint and then an index
constraint of the same name (as per recent report from André Hänsel).
Also, due to the lack of an actual unique index enforcing this, duplicates
could be created through race conditions.
Moreover, the code that searches pg_constraint has been quite inconsistent
about how to handle duplicate names if one did occur: some places checked
and threw errors if there was more than one match, while others just
processed the first match they came to.
To fix, create a unique index on (conrelid, contypid, conname). Since
either conrelid or contypid is zero, this will separately enforce
uniqueness of constraint names among constraints of any one table and any
one domain. (If we ever implement SQL assertions, and put them into this
catalog, more thought might be needed. But it'd be at least as reasonable
to put them into a new catalog; having overloaded this one catalog with
two kinds of constraints was a mistake already IMO.) This index can replace
the existing non-unique index on conrelid, though we need to keep the one
on contypid for query performance reasons.
Having done that, we can simplify the logic in various places that either
coped with duplicates or neglected to, as well as potentially improve
lookup performance when searching for a constraint by name.
Also, as per our usual practice, install a preliminary check so that you
get something more friendly than a unique-index violation report in the
case complained of by André. And teach ChooseIndexName to avoid choosing
autogenerated names that would draw such a failure.
While it's not possible to make such a change in the back branches,
it doesn't seem quite too late to put this into v11, so do so.
Discussion: https://postgr.es/m/0c1001d4428f$0942b430$1bc81c90$@webkr.de
2018-09-04 19:45:35 +02:00
|
|
|
*
|
|
|
|
* Returns ObjectAddress of the modified domain.
|
2002-12-06 06:00:34 +01:00
|
|
|
*/
|
2015-03-03 18:10:50 +01:00
|
|
|
ObjectAddress
|
2006-03-14 23:48:25 +01:00
|
|
|
AlterDomainDropConstraint(List *names, const char *constrName,
|
2012-01-05 18:48:55 +01:00
|
|
|
DropBehavior behavior, bool missing_ok)
|
2002-12-06 06:00:34 +01:00
|
|
|
{
|
|
|
|
TypeName *typename;
|
|
|
|
Oid domainoid;
|
|
|
|
HeapTuple tup;
|
|
|
|
Relation rel;
|
|
|
|
Relation conrel;
|
|
|
|
SysScanDesc conscan;
|
2018-09-04 19:45:35 +02:00
|
|
|
ScanKeyData skey[3];
|
2002-12-06 06:00:34 +01:00
|
|
|
HeapTuple contup;
|
2012-01-05 18:48:55 +01:00
|
|
|
bool found = false;
|
2018-09-04 19:45:35 +02:00
|
|
|
ObjectAddress address;
|
2002-12-06 06:00:34 +01:00
|
|
|
|
|
|
|
/* Make a TypeName so we can use standard type lookup machinery */
|
2006-03-14 23:48:25 +01:00
|
|
|
typename = makeTypeNameFromNameList(names);
|
2010-10-25 20:40:46 +02:00
|
|
|
domainoid = typenameTypeId(NULL, typename);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2006-03-14 23:48:25 +01:00
|
|
|
/* Look up the domain in the type table */
|
2019-01-21 19:32:19 +01:00
|
|
|
rel = table_open(TypeRelationId, RowExclusiveLock);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2010-02-14 19:42:19 +01:00
|
|
|
tup = SearchSysCacheCopy1(TYPEOID, ObjectIdGetDatum(domainoid));
|
2002-12-06 06:00:34 +01:00
|
|
|
if (!HeapTupleIsValid(tup))
|
2003-07-20 23:56:35 +02:00
|
|
|
elog(ERROR, "cache lookup failed for type %u", domainoid);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2006-03-14 23:48:25 +01:00
|
|
|
/* Check it's a domain and check user has permission for ALTER DOMAIN */
|
2010-10-25 05:04:37 +02:00
|
|
|
checkDomainOwner(tup);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
|
|
|
/* Grab an appropriate lock on the pg_constraint relation */
|
2019-01-21 19:32:19 +01:00
|
|
|
conrel = table_open(ConstraintRelationId, RowExclusiveLock);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2018-09-04 19:45:35 +02:00
|
|
|
/* Find and remove the target constraint */
|
|
|
|
ScanKeyInit(&skey[0],
|
|
|
|
Anum_pg_constraint_conrelid,
|
|
|
|
BTEqualStrategyNumber, F_OIDEQ,
|
|
|
|
ObjectIdGetDatum(InvalidOid));
|
|
|
|
ScanKeyInit(&skey[1],
|
2003-11-12 22:15:59 +01:00
|
|
|
Anum_pg_constraint_contypid,
|
|
|
|
BTEqualStrategyNumber, F_OIDEQ,
|
2018-09-04 19:45:35 +02:00
|
|
|
ObjectIdGetDatum(domainoid));
|
|
|
|
ScanKeyInit(&skey[2],
|
|
|
|
Anum_pg_constraint_conname,
|
|
|
|
BTEqualStrategyNumber, F_NAMEEQ,
|
|
|
|
CStringGetDatum(constrName));
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2018-09-04 19:45:35 +02:00
|
|
|
conscan = systable_beginscan(conrel, ConstraintRelidTypidNameIndexId, true,
|
|
|
|
NULL, 3, skey);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2018-09-04 19:45:35 +02:00
|
|
|
/* There can be at most one matching row */
|
|
|
|
if ((contup = systable_getnext(conscan)) != NULL)
|
2002-12-06 06:00:34 +01:00
|
|
|
{
|
2018-09-04 19:45:35 +02:00
|
|
|
ObjectAddress conobj;
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2018-09-04 19:45:35 +02:00
|
|
|
conobj.classId = ConstraintRelationId;
|
Remove WITH OIDS support, change oid catalog column visibility.
Previously tables declared WITH OIDS, including a significant fraction
of the catalog tables, stored the oid column not as a normal column,
but as part of the tuple header.
This special column was not shown by default, which was somewhat odd,
as it's often (consider e.g. pg_class.oid) one of the more important
parts of a row. Neither pg_dump nor COPY included the contents of the
oid column by default.
The fact that the oid column was not an ordinary column necessitated a
significant amount of special-case code to support oid columns. That was
already painful for the existing code, but the upcoming work aiming to make
table storage pluggable would have required expanding and duplicating
that "specialness" significantly.
WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0).
Remove it.
Removing includes:
- CREATE TABLE and ALTER TABLE syntax for declaring the table to be
WITH OIDS has been removed (WITH (oids[ = true]) will error out)
- pg_dump does not support dumping tables declared WITH OIDS and will
issue a warning when dumping one (and ignore the oid column).
- restoring a pg_dump archive with pg_restore will warn when
restoring a table with oid contents (and ignore the oid column)
- COPY will refuse to load binary dump that includes oids.
- pg_upgrade will error out when encountering tables declared WITH
OIDS, they have to be altered to remove the oid column first.
- Functionality to access the oid of the last inserted row (like
plpgsql's RESULT_OID, spi's SPI_lastoid, ...) has been removed.
The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false)
for CREATE TABLE) is still supported. While that requires a bit of
support code, it seems unnecessary to break applications / dumps that
do not use oids, and are explicit about not using them.
The biggest user of WITH OID columns was postgres' catalog. This
commit changes all 'magic' oid columns to be columns that are normally
declared and stored. To reduce unnecessary query breakage all the
newly added columns are still named 'oid', even if a table's column
naming scheme would indicate 'reloid' or such. This obviously
requires adapting a lot code, mostly replacing oid access via
HeapTupleGetOid() with access to the underlying Form_pg_*->oid column.
The bootstrap process now assigns oids for all oid columns in
genbki.pl that do not have an explicit value (starting at the largest
oid previously used); only oids assigned later will be above
FirstBootstrapObjectId. As the oid column now is a normal column the
special bootstrap syntax for oids has been removed.
Oids are not automatically assigned during insertion anymore, all
backend code explicitly assigns oids with GetNewOidWithIndex(). For
the rare case that insertions into the catalog via SQL are called for
the new pg_nextoid() function can be used (which only works on catalog
tables).
The fact that oid columns on system tables are now normal columns
means that they will be included in the set of columns expanded
by * (i.e. SELECT * FROM pg_class will now include the table's oid,
previously it did not). It'd not technically be hard to hide oid
column by default, but that'd mean confusing behavior would either
have to be carried forward forever, or it'd cause breakage down the
line.
While it's not unlikely that further adjustments are needed, the
scope/invasiveness of the patch makes it worthwhile to merge this
now. It's painful to maintain externally, too complicated to commit
after the code freeze, and a dependency of a number of other
patches.
Catversion bump, for obvious reasons.
Author: Andres Freund, with contributions by John Naylor
Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de
2018-11-21 00:36:57 +01:00
|
|
|
conobj.objectId = ((Form_pg_constraint) GETSTRUCT(contup))->oid;
|
2018-09-04 19:45:35 +02:00
|
|
|
conobj.objectSubId = 0;
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2018-09-04 19:45:35 +02:00
|
|
|
performDeletion(&conobj, behavior, 0);
|
|
|
|
found = true;
|
2002-12-06 06:00:34 +01:00
|
|
|
}
|
Change many routines to return ObjectAddress rather than OID
The changed routines are mostly those that can be directly called by
ProcessUtilitySlow; the intention is to make the affected object
information more precise, in support for future event trigger changes.
Originally it was envisioned that the OID of the affected object would
be enough, and in most cases that is correct, but upon actually
implementing the event trigger changes it turned out that ObjectAddress
is more widely useful.
Additionally, some command execution routines grew an output argument
that's an object address which provides further info about the executed
command. To wit:
* for ALTER DOMAIN / ADD CONSTRAINT, it corresponds to the address of
the new constraint
* for ALTER OBJECT / SET SCHEMA, it corresponds to the address of the
schema that originally contained the object.
* for ALTER EXTENSION {ADD, DROP} OBJECT, it corresponds to the address
of the object added to or dropped from the extension.
There's no user-visible change in this commit, and no functional change
either.
Discussion: 20150218213255.GC6717@tamriel.snowman.net
Reviewed-By: Stephen Frost, Andres Freund
2015-03-03 18:10:50 +01:00
|
|
|
|
2002-12-06 06:00:34 +01:00
|
|
|
/* Clean up after the scan */
|
|
|
|
systable_endscan(conscan);
|
2019-01-21 19:32:19 +01:00
|
|
|
table_close(conrel, RowExclusiveLock);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2012-01-05 18:48:55 +01:00
|
|
|
if (!found)
|
|
|
|
{
|
|
|
|
if (!missing_ok)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_UNDEFINED_OBJECT),
|
Phase 3 of pgindent updates.
Don't move parenthesized lines to the left, even if that means they
flow past the right margin.
By default, BSD indent lines up statement continuation lines that are
within parentheses so that they start just to the right of the preceding
left parenthesis. However, traditionally, if that resulted in the
continuation line extending to the right of the desired right margin,
then indent would push it left just far enough to not overrun the margin,
if it could do so without making the continuation line start to the left of
the current statement indent. That makes for a weird mix of indentations
unless one has been completely rigid about never violating the 80-column
limit.
This behavior has been pretty universally panned by Postgres developers.
Hence, disable it with indent's new -lpl switch, so that parenthesized
lines are always lined up with the preceding left paren.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 21:35:54 +02:00
|
|
|
errmsg("constraint \"%s\" of domain \"%s\" does not exist",
|
|
|
|
constrName, TypeNameToString(typename))));
|
2012-01-05 18:48:55 +01:00
|
|
|
else
|
|
|
|
ereport(NOTICE,
|
|
|
|
(errmsg("constraint \"%s\" of domain \"%s\" does not exist, skipping",
|
|
|
|
constrName, TypeNameToString(typename))));
|
|
|
|
}
|
2012-12-29 13:55:37 +01:00
|
|
|
|
Drop no-op CoerceToDomain nodes from expressions at planning time.
If a domain has no constraints, then CoerceToDomain doesn't really do
anything and can be simplified to a RelabelType. This not only
eliminates cycles at execution, but allows the planner to optimize better
(for instance, match the coerced expression to an index on the underlying
column). However, we do have to support invalidating the plan later if
a constraint gets added to the domain. That's comparable to the case of
a change to a SQL function that had been inlined into a plan, so all the
necessary logic already exists for plans depending on functions. We
need only duplicate or share that logic for domains.
ALTER DOMAIN ADD/DROP CONSTRAINT need to be taught to send out sinval
messages for the domain's pg_type entry, since those operations don't
update that row. (ALTER DOMAIN SET/DROP NOT NULL do update that row,
so no code change is needed for them.)
Testing this revealed what's really a pre-existing bug in plpgsql:
it caches the SQL-expression-tree expansion of type coercions and
had no provision for invalidating entries in that cache. Up to now
that was only a problem if such an expression had inlined a SQL
function that got changed, which is unlikely though not impossible.
But failing to track changes of domain constraints breaks an existing
regression test case and would likely cause practical problems too.
We could fix that locally in plpgsql, but what seems like a better
idea is to build some generic infrastructure in plancache.c to store
standalone expressions and track invalidation events for them.
(It's tempting to wonder whether plpgsql's "simple expression" stuff
could use this code with lower overhead than its current use of the
heavyweight plancache APIs. But I've left that idea for later.)
Other stuff fixed in passing:
* Allow estimate_expression_value() to drop CoerceToDomain
unconditionally, effectively assuming that the coercion will succeed.
This will improve planner selectivity estimates for cases involving
estimatable expressions that are coerced to domains. We could have
done this independently of everything else here, but there wasn't
previously any need for eval_const_expressions_mutator to know about
CoerceToDomain at all.
* Use a dlist for plancache.c's list of cached plans, rather than a
manually threaded singly-linked list. That eliminates a potential
performance problem in DropCachedPlan.
* Fix a couple of inconsistencies in typecmds.c about whether
operations on domains drop RowExclusiveLock on pg_type. Our common
practice is that DDL operations do drop catalog locks, so standardize
on that choice.
Discussion: https://postgr.es/m/19958.1544122124@sss.pgh.pa.us
2018-12-13 19:24:43 +01:00
|
|
|
/*
|
|
|
|
* We must send out an sinval message for the domain, to ensure that any
|
|
|
|
* dependent plans get rebuilt. Since this command doesn't change the
|
|
|
|
* domain's pg_type row, that won't happen automatically; do it manually.
|
|
|
|
*/
|
|
|
|
CacheInvalidateHeapTuple(rel, tup, NULL);
|
|
|
|
|
2018-09-04 19:45:35 +02:00
|
|
|
ObjectAddressSet(address, TypeRelationId, domainoid);
|
|
|
|
|
2018-12-13 19:24:43 +01:00
	/* Clean up */
	table_close(rel, RowExclusiveLock);

Change many routines to return ObjectAddress rather than OID
The changed routines are mostly those that can be directly called by
ProcessUtilitySlow; the intention is to make the affected object
information more precise, in support for future event trigger changes.
Originally it was envisioned that the OID of the affected object would
be enough, and in most cases that is correct, but upon actually
implementing the event trigger changes it turned out that ObjectAddress
is more widely useful.
Additionally, some command execution routines grew an output argument
that's an object address which provides further info about the executed
command. To wit:
* for ALTER DOMAIN / ADD CONSTRAINT, it corresponds to the address of
the new constraint
* for ALTER OBJECT / SET SCHEMA, it corresponds to the address of the
schema that originally contained the object.
* for ALTER EXTENSION {ADD, DROP} OBJECT, it corresponds to the address
of the object added to or dropped from the extension.
There's no user-visible change in this commit, and no functional change
either.
Discussion: 20150218213255.GC6717@tamriel.snowman.net
Reviewed-By: Stephen Frost, Andres Freund
2015-03-03 18:10:50 +01:00
	return address;
}

/*
 * AlterDomainAddConstraint
 *
 * Implements the ALTER DOMAIN .. ADD CONSTRAINT statement.
 */
ObjectAddress
AlterDomainAddConstraint(List *names, Node *newConstraint,
						 ObjectAddress *constrAddr)
{
	TypeName   *typename;
	Oid			domainoid;
	Relation	typrel;
	HeapTuple	tup;
	Form_pg_type typTup;
	Constraint *constr;
	char	   *ccbin;
	ObjectAddress address;

	/* Make a TypeName so we can use standard type lookup machinery */
	typename = makeTypeNameFromNameList(names);
	domainoid = typenameTypeId(NULL, typename);

	/* Look up the domain in the type table */
	typrel = table_open(TypeRelationId, RowExclusiveLock);

	tup = SearchSysCacheCopy1(TYPEOID, ObjectIdGetDatum(domainoid));
	if (!HeapTupleIsValid(tup))
		elog(ERROR, "cache lookup failed for type %u", domainoid);
	typTup = (Form_pg_type) GETSTRUCT(tup);

	/* Check it's a domain and check user has permission for ALTER DOMAIN */
	checkDomainOwner(tup);

	if (!IsA(newConstraint, Constraint))
		elog(ERROR, "unrecognized node type: %d",
			 (int) nodeTag(newConstraint));

	constr = (Constraint *) newConstraint;

	switch (constr->contype)
	{
		case CONSTR_CHECK:
			/* processed below */
			break;

		case CONSTR_UNIQUE:
			ereport(ERROR,
					(errcode(ERRCODE_SYNTAX_ERROR),
					 errmsg("unique constraints not possible for domains")));
			break;

		case CONSTR_PRIMARY:
			ereport(ERROR,
					(errcode(ERRCODE_SYNTAX_ERROR),
Phase 3 of pgindent updates.
Don't move parenthesized lines to the left, even if that means they
flow past the right margin.
By default, BSD indent lines up statement continuation lines that are
within parentheses so that they start just to the right of the preceding
left parenthesis. However, traditionally, if that resulted in the
continuation line extending to the right of the desired right margin,
then indent would push it left just far enough to not overrun the margin,
if it could do so without making the continuation line start to the left of
the current statement indent. That makes for a weird mix of indentations
unless one has been completely rigid about never violating the 80-column
limit.
This behavior has been pretty universally panned by Postgres developers.
Hence, disable it with indent's new -lpl switch, so that parenthesized
lines are always lined up with the preceding left paren.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 21:35:54 +02:00
					 errmsg("primary key constraints not possible for domains")));
			break;

		case CONSTR_EXCLUSION:
			ereport(ERROR,
					(errcode(ERRCODE_SYNTAX_ERROR),
					 errmsg("exclusion constraints not possible for domains")));
			break;

		case CONSTR_FOREIGN:
			ereport(ERROR,
					(errcode(ERRCODE_SYNTAX_ERROR),
					 errmsg("foreign key constraints not possible for domains")));
			break;

		case CONSTR_ATTR_DEFERRABLE:
		case CONSTR_ATTR_NOT_DEFERRABLE:
		case CONSTR_ATTR_DEFERRED:
		case CONSTR_ATTR_IMMEDIATE:
			ereport(ERROR,
					(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
					 errmsg("specifying constraint deferrability not supported for domains")));
			break;

		default:
			elog(ERROR, "unrecognized constraint subtype: %d",
				 (int) constr->contype);
			break;
	}

	/*
	 * Since all other constraint types throw errors, this must be a check
	 * constraint.  First, process the constraint expression and add an entry
	 * to pg_constraint.
	 */
Provide database object names as separate fields in error messages.
This patch addresses the problem that applications currently have to
extract object names from possibly-localized textual error messages,
if they want to know for example which index caused a UNIQUE_VIOLATION
failure. It adds new error message fields to the wire protocol, which
can carry the name of a table, table column, data type, or constraint
associated with the error. (Since the protocol spec has always instructed
clients to ignore unrecognized field types, this should not create any
compatibility problem.)
Support for providing these new fields has been added to just a limited set
of error reports (mainly, those in the "integrity constraint violation"
SQLSTATE class), but we will doubtless add them to more calls in future.
Pavel Stehule, reviewed and extensively revised by Peter Geoghegan, with
additional hacking by Tom Lane.
2013-01-29 23:06:26 +01:00
	ccbin = domainAddConstraint(domainoid, typTup->typnamespace,
								typTup->typbasetype, typTup->typtypmod,
								constr, NameStr(typTup->typname), constrAddr);

	/*
	 * If requested to validate the constraint, test all values stored in the
	 * attributes based on the domain the constraint is being added to.
	 */
	if (!constr->skip_validation)
		validateDomainConstraint(domainoid, ccbin);

	/*
	 * We must send out an sinval message for the domain, to ensure that any
	 * dependent plans get rebuilt.  Since this command doesn't change the
	 * domain's pg_type row, that won't happen automatically; do it manually.
	 */
	CacheInvalidateHeapTuple(typrel, tup, NULL);

	ObjectAddressSet(address, TypeRelationId, domainoid);

	/* Clean up */
	table_close(typrel, RowExclusiveLock);

	return address;
}

/*
 * AlterDomainValidateConstraint
 *
 * Implements the ALTER DOMAIN .. VALIDATE CONSTRAINT statement.
 */
ObjectAddress
AlterDomainValidateConstraint(List *names, const char *constrName)
{
	TypeName   *typename;
	Oid			domainoid;
	Relation	typrel;
	Relation	conrel;
	HeapTuple	tup;
Fully enforce uniqueness of constraint names.
It's been true for a long time that we expect names of table and domain
constraints to be unique among the constraints of that table or domain.
However, the enforcement of that has been pretty haphazard, and it missed
some corner cases such as creating a CHECK constraint and then an index
constraint of the same name (as per recent report from André Hänsel).
Also, due to the lack of an actual unique index enforcing this, duplicates
could be created through race conditions.
Moreover, the code that searches pg_constraint has been quite inconsistent
about how to handle duplicate names if one did occur: some places checked
and threw errors if there was more than one match, while others just
processed the first match they came to.
To fix, create a unique index on (conrelid, contypid, conname). Since
either conrelid or contypid is zero, this will separately enforce
uniqueness of constraint names among constraints of any one table and any
one domain. (If we ever implement SQL assertions, and put them into this
catalog, more thought might be needed. But it'd be at least as reasonable
to put them into a new catalog; having overloaded this one catalog with
two kinds of constraints was a mistake already IMO.) This index can replace
the existing non-unique index on conrelid, though we need to keep the one
on contypid for query performance reasons.
Having done that, we can simplify the logic in various places that either
coped with duplicates or neglected to, as well as potentially improve
lookup performance when searching for a constraint by name.
Also, as per our usual practice, install a preliminary check so that you
get something more friendly than a unique-index violation report in the
case complained of by André. And teach ChooseIndexName to avoid choosing
autogenerated names that would draw such a failure.
While it's not possible to make such a change in the back branches,
it doesn't seem quite too late to put this into v11, so do so.
Discussion: https://postgr.es/m/0c1001d4428f$0942b430$1bc81c90$@webkr.de
2018-09-04 19:45:35 +02:00
	Form_pg_constraint con;
	Form_pg_constraint copy_con;
	char	   *conbin;
	SysScanDesc scan;
	Datum		val;
	bool		isnull;
	HeapTuple	tuple;
	HeapTuple	copyTuple;
	ScanKeyData skey[3];
	ObjectAddress address;

	/* Make a TypeName so we can use standard type lookup machinery */
	typename = makeTypeNameFromNameList(names);
	domainoid = typenameTypeId(NULL, typename);

	/* Look up the domain in the type table */
	typrel = table_open(TypeRelationId, AccessShareLock);

	tup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(domainoid));
	if (!HeapTupleIsValid(tup))
		elog(ERROR, "cache lookup failed for type %u", domainoid);

	/* Check it's a domain and check user has permission for ALTER DOMAIN */
	checkDomainOwner(tup);

	/*
	 * Find and check the target constraint
	 */
	conrel = table_open(ConstraintRelationId, RowExclusiveLock);
Fully enforce uniqueness of constraint names.
It's been true for a long time that we expect names of table and domain
constraints to be unique among the constraints of that table or domain.
However, the enforcement of that has been pretty haphazard, and it missed
some corner cases such as creating a CHECK constraint and then an index
constraint of the same name (as per recent report from André Hänsel).
Also, due to the lack of an actual unique index enforcing this, duplicates
could be created through race conditions.
Moreover, the code that searches pg_constraint has been quite inconsistent
about how to handle duplicate names if one did occur: some places checked
and threw errors if there was more than one match, while others just
processed the first match they came to.
To fix, create a unique index on (conrelid, contypid, conname). Since
either conrelid or contypid is zero, this will separately enforce
uniqueness of constraint names among constraints of any one table and any
one domain. (If we ever implement SQL assertions, and put them into this
catalog, more thought might be needed. But it'd be at least as reasonable
to put them into a new catalog; having overloaded this one catalog with
two kinds of constraints was a mistake already IMO.) This index can replace
the existing non-unique index on conrelid, though we need to keep the one
on contypid for query performance reasons.
Having done that, we can simplify the logic in various places that either
coped with duplicates or neglected to, as well as potentially improve
lookup performance when searching for a constraint by name.
Also, as per our usual practice, install a preliminary check so that you
get something more friendly than a unique-index violation report in the
case complained of by André. And teach ChooseIndexName to avoid choosing
autogenerated names that would draw such a failure.
While it's not possible to make such a change in the back branches,
it doesn't seem quite too late to put this into v11, so do so.
Discussion: https://postgr.es/m/0c1001d4428f$0942b430$1bc81c90$@webkr.de
2018-09-04 19:45:35 +02:00
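
The mechanism the commit describes can be sketched conceptually (this is not PostgreSQL code; the class and names are invented for illustration): because exactly one of conrelid or contypid is nonzero for any pg_constraint row, a single unique key over (conrelid, contypid, conname) separately enforces name uniqueness among one table's constraints and among one domain's constraints, while letting a table and a domain reuse the same name.

```python
# Conceptual sketch of the unique index on (conrelid, contypid, conname).
# Not PostgreSQL code; names are hypothetical.

class DuplicateConstraintError(Exception):
    pass

class ConstraintCatalog:
    def __init__(self):
        self._unique = set()  # models the unique index

    def insert(self, conrelid, contypid, conname):
        # Exactly one of the two owner OIDs is nonzero for any row.
        assert (conrelid == 0) != (contypid == 0), "exactly one owner"
        key = (conrelid, contypid, conname)
        if key in self._unique:
            # models the unique-index violation (or the friendlier
            # preliminary check the commit adds in front of it)
            raise DuplicateConstraintError(conname)
        self._unique.add(key)

cat = ConstraintCatalog()
cat.insert(conrelid=16384, contypid=0, conname="chk")  # table constraint
cat.insert(conrelid=0, contypid=16385, conname="chk")  # same name on a domain: OK
try:
    cat.insert(conrelid=16384, contypid=0, conname="chk")  # duplicate on same table
except DuplicateConstraintError:
    pass
```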
	ScanKeyInit(&skey[0],
				Anum_pg_constraint_conrelid,
				BTEqualStrategyNumber, F_OIDEQ,
				ObjectIdGetDatum(InvalidOid));
	ScanKeyInit(&skey[1],
				Anum_pg_constraint_contypid,
				BTEqualStrategyNumber, F_OIDEQ,
				ObjectIdGetDatum(domainoid));
	ScanKeyInit(&skey[2],
				Anum_pg_constraint_conname,
				BTEqualStrategyNumber, F_NAMEEQ,
				CStringGetDatum(constrName));
	scan = systable_beginscan(conrel, ConstraintRelidTypidNameIndexId, true,
							  NULL, 3, skey);
	/* There can be at most one matching row */
	if (!HeapTupleIsValid(tuple = systable_getnext(scan)))
		ereport(ERROR,
				(errcode(ERRCODE_UNDEFINED_OBJECT),
				 errmsg("constraint \"%s\" of domain \"%s\" does not exist",
						constrName, TypeNameToString(typename))));
	con = (Form_pg_constraint) GETSTRUCT(tuple);
	if (con->contype != CONSTRAINT_CHECK)
		ereport(ERROR,
				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
Phase 3 of pgindent updates.
Don't move parenthesized lines to the left, even if that means they
flow past the right margin.
By default, BSD indent lines up statement continuation lines that are
within parentheses so that they start just to the right of the preceding
left parenthesis. However, traditionally, if that resulted in the
continuation line extending to the right of the desired right margin,
then indent would push it left just far enough to not overrun the margin,
if it could do so without making the continuation line start to the left of
the current statement indent. That makes for a weird mix of indentations
unless one has been completely rigid about never violating the 80-column
limit.
This behavior has been pretty universally panned by Postgres developers.
Hence, disable it with indent's new -lpl switch, so that parenthesized
lines are always lined up with the preceding left paren.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 21:35:54 +02:00
				 errmsg("constraint \"%s\" of domain \"%s\" is not a check constraint",
						constrName, TypeNameToString(typename))));

	val = SysCacheGetAttr(CONSTROID, tuple,
						  Anum_pg_constraint_conbin,
						  &isnull);
	if (isnull)
		elog(ERROR, "null conbin for constraint %u",
Remove WITH OIDS support, change oid catalog column visibility.
Previously tables declared WITH OIDS, including a significant fraction
of the catalog tables, stored the oid column not as a normal column,
but as part of the tuple header.
This special column was not shown by default, which was somewhat odd,
as it's often (consider e.g. pg_class.oid) one of the more important
parts of a row. Neither pg_dump nor COPY included the contents of the
oid column by default.
The fact that the oid column was not an ordinary column necessitated a
significant amount of special case code to support oid columns. That
already was painful for the existing, but upcoming work aiming to make
table storage pluggable, would have required expanding and duplicating
that "specialness" significantly.
WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0).
Remove it.
Removing includes:
- CREATE TABLE and ALTER TABLE syntax for declaring the table to be
WITH OIDS has been removed (WITH (oids[ = true]) will error out)
- pg_dump does not support dumping tables declared WITH OIDS and will
issue a warning when dumping one (and ignore the oid column).
- restoring a pg_dump archive with pg_restore will warn when
restoring a table with oid contents (and ignore the oid column)
- COPY will refuse to load binary dump that includes oids.
- pg_upgrade will error out when encountering tables declared WITH
OIDS, they have to be altered to remove the oid column first.
- Functionality to access the oid of the last inserted row (like
plpgsql's RESULT_OID, spi's SPI_lastoid, ...) has been removed.
The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false)
for CREATE TABLE) is still supported. While that requires a bit of
support code, it seems unnecessary to break applications / dumps that
do not use oids, and are explicit about not using them.
The biggest user of WITH OID columns was postgres' catalog. This
commit changes all 'magic' oid columns to be columns that are normally
declared and stored. To reduce unnecessary query breakage all the
newly added columns are still named 'oid', even if a table's column
naming scheme would indicate 'reloid' or such. This obviously
requires adapting a lot of code, mostly replacing oid access via
HeapTupleGetOid() with access to the underlying Form_pg_*->oid column.
The bootstrap process now assigns oids for all oid columns in
genbki.pl that do not have an explicit value (starting at the largest
oid previously used), only oids assigned later by oids will be above
FirstBootstrapObjectId. As the oid column now is a normal column the
special bootstrap syntax for oids has been removed.
Oids are not automatically assigned during insertion anymore, all
backend code explicitly assigns oids with GetNewOidWithIndex(). For
the rare case that insertions into the catalog via SQL are called for
the new pg_nextoid() function can be used (which only works on catalog
tables).
The fact that oid columns on system tables are now normal columns
means that they will be included in the set of columns expanded
by * (i.e. SELECT * FROM pg_class will now include the table's oid,
previously it did not). It'd not technically be hard to hide oid
column by default, but that'd mean confusing behavior would either
have to be carried forward forever, or it'd cause breakage down the
line.
While it's not unlikely that further adjustments are needed, the
scope/invasiveness of the patch makes it worthwhile to merge this
now. It's painful to maintain externally, too complicated to commit
after the code freeze, and a dependency of a number of other
patches.
Catversion bump, for obvious reasons.
Author: Andres Freund, with contributions by John Naylor
Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de
2018-11-21 00:36:57 +01:00
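
The explicit-assignment scheme the commit describes can be sketched as follows (a conceptual sketch only, not the real GetNewOidWithIndex; the function name and bound are invented): once oid is an ordinary column, callers allocate oids themselves by advancing a counter and probing an index until an unused value is found.

```python
# Hypothetical sketch of explicit oid allocation after WITH OIDS removal.
# Not PostgreSQL code; the set stands in for a unique-index probe.

FIRST_NORMAL_OID = 16384  # assumed lower bound for illustration

def get_new_oid(used, start):
    """Return the first unused oid at or above max(start, FIRST_NORMAL_OID)."""
    oid = max(start, FIRST_NORMAL_OID)
    while oid in used:       # models the index lookup for collisions
        oid += 1
    used.add(oid)
    return oid

used = {16384, 16385}
print(get_new_oid(used, 1))  # skips the two taken oids
```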
			 con->oid);
	conbin = TextDatumGetCString(val);

	validateDomainConstraint(domainoid, conbin);

	/*
	 * Now update the catalog, while we have the door open.
	 */
	copyTuple = heap_copytuple(tuple);
	copy_con = (Form_pg_constraint) GETSTRUCT(copyTuple);
	copy_con->convalidated = true;
	CatalogTupleUpdate(conrel, &copyTuple->t_self, copyTuple);
	InvokeObjectPostAlterHook(ConstraintRelationId, con->oid, 0);
	ObjectAddressSet(address, TypeRelationId, domainoid);

	heap_freetuple(copyTuple);

	systable_endscan(scan);

	table_close(typrel, AccessShareLock);
	table_close(conrel, RowExclusiveLock);

	ReleaseSysCache(tup);
	return address;
}

static void
validateDomainConstraint(Oid domainoid, char *ccbin)
{
	Expr	   *expr = (Expr *) stringToNode(ccbin);
	List	   *rels;
	ListCell   *rt;
	EState	   *estate;
	ExprContext *econtext;
	ExprState  *exprstate;

	/* Need an EState to run ExecEvalExpr */
	estate = CreateExecutorState();
	econtext = GetPerTupleExprContext(estate);

	/* build execution state for expr */
	exprstate = ExecPrepareExpr(expr, estate);

	/* Fetch relation list with attributes based on this domain */
	/* ShareLock is sufficient to prevent concurrent data changes */
	rels = get_rels_with_domain(domainoid, ShareLock);

	foreach(rt, rels)
	{
		RelToCheck *rtc = (RelToCheck *) lfirst(rt);
		Relation	testrel = rtc->rel;
		TupleDesc	tupdesc = RelationGetDescr(testrel);
tableam: Add and use scan APIs.
To allow table accesses to not be directly dependent on heap, several
new abstractions are needed. Specifically:
1) Heap scans need to be generalized into table scans. Do this by
introducing TableScanDesc, which will be the "base class" for
individual AMs. This contains the AM independent fields from
HeapScanDesc.
The previous heap_{beginscan,rescan,endscan} et al. have been
replaced with a table_ version.
There's no direct replacement for heap_getnext(), as that returned
a HeapTuple, which is undesirable for other AMs. Instead there's
table_scan_getnextslot(). But note that heap_getnext() lives on,
it's still used widely to access catalog tables.
This is achieved by new scan_begin, scan_end, scan_rescan,
scan_getnextslot callbacks.
2) The portion of parallel scans that's shared between backends need
to be able to do so without the user doing per-AM work. To achieve
that new parallelscan_{estimate, initialize, reinitialize}
callbacks are introduced, which operate on a new
ParallelTableScanDesc, which again can be subclassed by AMs.
As it is likely that several AMs are going to be block oriented,
block oriented callbacks that can be shared between such AMs are
provided and used by heap. table_block_parallelscan_{estimate,
initialize, reinitialize} as callbacks, and
table_block_parallelscan_{nextpage, init} for use in AMs. These
operate on a ParallelBlockTableScanDesc.
3) Index scans need to be able to access tables to return a tuple, and
there needs to be state across individual accesses to the heap to
store state like buffers. That's now handled by introducing a
sort-of-scan IndexFetchTable, which again is intended to be
subclassed by individual AMs (for heap IndexFetchHeap).
The relevant callbacks for an AM are index_fetch_{end, begin,
reset} to create the necessary state, and index_fetch_tuple to
retrieve an indexed tuple. Note that index_fetch_tuple
implementations need to be smarter than just blindly fetching the
tuples for AMs that have optimizations similar to heap's HOT - the
currently alive tuple in the update chain needs to be fetched if
appropriate.
Similar to table_scan_getnextslot(), it's undesirable to continue
to return HeapTuples. Thus index_fetch_heap (might want to rename
that later) now accepts a slot as an argument. Core code doesn't
have a lot of call sites performing index scans without going
through the systable_* API (in contrast to loads of heap_getnext
calls and working directly with HeapTuples).
Index scans now store the result of a search in
IndexScanDesc->xs_heaptid, rather than xs_ctup->t_self. As the
target is not generally a HeapTuple anymore that seems cleaner.
To be able to sensibly adapt code to use the above, two further
callbacks have been introduced:
a) slot_callbacks returns a TupleTableSlotOps* suitable for creating
slots capable of holding a tuple of the AMs
type. table_slot_callbacks() and table_slot_create() are based
upon that, but have additional logic to deal with views, foreign
tables, etc.
While this change could have been done separately, nearly all the
call sites that needed to be adapted for the rest of this commit
also would have been needed to be adapted for
table_slot_callbacks(), making separation not worthwhile.
b) tuple_satisfies_snapshot checks whether the tuple in a slot is
currently visible according to a snapshot. That's required as a few
places now don't have a buffer + HeapTuple around, but a
slot (which in heap's case internally has that information).
Additionally a few infrastructure changes were needed:
I) SysScanDesc, as used by systable_{beginscan, getnext} et al. now
internally uses a slot to keep track of tuples. While
systable_getnext() still returns HeapTuples, and will so for the
foreseeable future, the index API (see 1) above) now only deals with
slots.
The remainder, and largest part, of this commit is then adjusting all
scans in postgres to use the new APIs.
Author: Andres Freund, Haribabu Kommi, Alvaro Herrera
Discussion:
https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
2019-03-11 20:46:41 +01:00
		TupleTableSlot *slot;
		TableScanDesc scan;
Use an MVCC snapshot, rather than SnapshotNow, for catalog scans.
SnapshotNow scans have the undesirable property that, in the face of
concurrent updates, the scan can fail to see either the old or the new
versions of the row. In many cases, we work around this by requiring
DDL operations to hold AccessExclusiveLock on the object being
modified; in some cases, the existing locking is inadequate and random
failures occur as a result. This commit doesn't change anything
related to locking, but will hopefully pave the way to allowing lock
strength reductions in the future.
The major issue that has held us back from making this change in the past
is that taking an MVCC snapshot is significantly more expensive than
using a static special snapshot such as SnapshotNow. However, testing
of various worst-case scenarios reveals that this problem is not
severe except under fairly extreme workloads. To mitigate those
problems, we avoid retaking the MVCC snapshot for each new scan;
instead, we take a new snapshot only when invalidation messages have
been processed. The catcache machinery already requires that
invalidation messages be sent before releasing the related heavyweight
lock; else other backends might rely on locally-cached data rather
than scanning the catalog at all. Thus, making snapshot reuse
dependent on the same guarantees shouldn't break anything that wasn't
already subtly broken.
Patch by me. Review by Michael Paquier and Andres Freund.
2013-07-02 15:47:01 +02:00
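
The snapshot-reuse mitigation described above can be sketched conceptually (this is not PostgreSQL code; the class and method names are invented): one MVCC snapshot is kept and reused across catalog scans, and it is discarded only when invalidation messages are processed, which forces the next scan to take a fresh one.

```python
# Hypothetical sketch of reusing an MVCC snapshot for catalog scans
# until invalidation messages arrive. Names are invented.

class CatalogSnapshotManager:
    def __init__(self):
        self._snapshot = None

    def process_invalidation(self):
        # Invalidation implies the catalogs may have changed visibly;
        # drop the cached snapshot so the next scan takes a fresh one.
        self._snapshot = None

    def get_snapshot(self, take_new):
        # take_new: callable standing in for GetLatestSnapshot()
        if self._snapshot is None:
            self._snapshot = take_new()
        return self._snapshot

mgr = CatalogSnapshotManager()
s1 = mgr.get_snapshot(object)
s2 = mgr.get_snapshot(object)   # reused: no invalidation in between
mgr.process_invalidation()
s3 = mgr.get_snapshot(object)   # fresh snapshot after invalidation
```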
		Snapshot	snapshot;

		/* Scan all tuples in this relation */
|
|
|
snapshot = RegisterSnapshot(GetLatestSnapshot());
|
tableam: Add and use scan APIs.
To allow table accesses to not be directly dependent on heap, several
new abstractions are needed. Specifically:
1) Heap scans need to be generalized into table scans. Do this by
introducing TableScanDesc, which will be the "base class" for
individual AMs. This contains the AM independent fields from
HeapScanDesc.
The previous heap_{beginscan,rescan,endscan} et al. have been
replaced with a table_ version.
There's no direct replacement for heap_getnext(), as that returned
a HeapTuple, which is undesirable for other AMs. Instead there's
table_scan_getnextslot(). But note that heap_getnext() lives on,
it's still used widely to access catalog tables.
This is achieved by new scan_begin, scan_end, scan_rescan,
scan_getnextslot callbacks.
2) The portion of parallel scans that's shared between backends needs
to be set up without the user doing per-AM work. To achieve
that new parallelscan_{estimate, initialize, reinitialize}
callbacks are introduced, which operate on a new
ParallelTableScanDesc, which again can be subclassed by AMs.
As it is likely that several AMs are going to be block oriented,
block oriented callbacks that can be shared between such AMs are
provided and used by heap. table_block_parallelscan_{estimate,
initialize, reinitialize} as callbacks, and
table_block_parallelscan_{nextpage, init} for use in AMs. These
operate on a ParallelBlockTableScanDesc.
3) Index scans need to be able to access tables to return a tuple, and
there needs to be state across individual accesses to the heap to
store state like buffers. That's now handled by introducing a
sort-of-scan IndexFetchTable, which again is intended to be
subclassed by individual AMs (for heap IndexFetchHeap).
The relevant callbacks for an AM are index_fetch_{end, begin,
reset} to create the necessary state, and index_fetch_tuple to
retrieve an indexed tuple. Note that index_fetch_tuple
implementations need to be smarter than just blindly fetching the
tuples for AMs that have optimizations similar to heap's HOT - the
currently alive tuple in the update chain needs to be fetched if
appropriate.
Similar to table_scan_getnextslot(), it's undesirable to continue
to return HeapTuples. Thus index_fetch_heap (might want to rename
that later) now accepts a slot as an argument. Core code doesn't
have a lot of call sites performing index scans without going
through the systable_* API (in contrast to loads of heap_getnext
calls and working directly with HeapTuples).
Index scans now store the result of a search in
IndexScanDesc->xs_heaptid, rather than xs_ctup->t_self. As the
target is not generally a HeapTuple anymore that seems cleaner.
To be able to sensibly adapt code to use the above, two further
callbacks have been introduced:
a) slot_callbacks returns a TupleTableSlotOps* suitable for creating
slots capable of holding a tuple of the AM's
type. table_slot_callbacks() and table_slot_create() are based
upon that, but have additional logic to deal with views, foreign
tables, etc.
While this change could have been done separately, nearly all the
call sites that needed to be adapted for the rest of this commit
also would have been needed to be adapted for
table_slot_callbacks(), making separation not worthwhile.
b) tuple_satisfies_snapshot checks whether the tuple in a slot is
currently visible according to a snapshot. That's required as a few
places now don't have a buffer + HeapTuple around, but a
slot (which in heap's case internally has that information).
Additionally a few infrastructure changes were needed:
I) SysScanDesc, as used by systable_{beginscan, getnext} et al. now
internally uses a slot to keep track of tuples. While
systable_getnext() still returns HeapTuples, and will so for the
foreseeable future, the index API (see 1) above) now only deals with
slots.
The remainder, and largest part, of this commit is then adjusting all
scans in postgres to use the new APIs.
Author: Andres Freund, Haribabu Kommi, Alvaro Herrera
Discussion:
https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
2019-03-11 20:46:41 +01:00
|
|
|
scan = table_beginscan(testrel, snapshot, 0, NULL);
|
|
|
|
slot = table_slot_create(testrel, NULL);
|
|
|
|
while (table_scan_getnextslot(scan, ForwardScanDirection, slot))
|
2002-12-06 06:00:34 +01:00
|
|
|
{
|
2003-08-04 02:43:34 +02:00
|
|
|
int i;
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2003-01-04 01:46:08 +01:00
|
|
|
/* Test attributes that are of the domain */
|
2002-12-06 06:00:34 +01:00
|
|
|
for (i = 0; i < rtc->natts; i++)
|
|
|
|
{
|
2003-08-04 02:43:34 +02:00
|
|
|
int attnum = rtc->atts[i];
|
|
|
|
Datum d;
|
|
|
|
bool isNull;
|
|
|
|
Datum conResult;
|
2017-08-20 20:19:07 +02:00
|
|
|
Form_pg_attribute attr = TupleDescAttr(tupdesc, attnum - 1);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
|
|
|
d = slot_getattr(slot, attnum, &isNull);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
|
|
|
econtext->domainValue_datum = d;
|
|
|
|
econtext->domainValue_isNull = isNull;
|
|
|
|
|
2002-12-15 17:17:59 +01:00
|
|
|
conResult = ExecEvalExprSwitchContext(exprstate,
|
|
|
|
econtext,
|
2017-01-19 23:12:38 +01:00
|
|
|
&isNull);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2002-12-12 21:35:16 +01:00
|
|
|
if (!isNull && !DatumGetBool(conResult))
|
Provide database object names as separate fields in error messages.
This patch addresses the problem that applications currently have to
extract object names from possibly-localized textual error messages,
if they want to know for example which index caused a UNIQUE_VIOLATION
failure. It adds new error message fields to the wire protocol, which
can carry the name of a table, table column, data type, or constraint
associated with the error. (Since the protocol spec has always instructed
clients to ignore unrecognized field types, this should not create any
compatibility problem.)
Support for providing these new fields has been added to just a limited set
of error reports (mainly, those in the "integrity constraint violation"
SQLSTATE class), but we will doubtless add them to more calls in future.
Pavel Stehule, reviewed and extensively revised by Peter Geoghegan, with
additional hacking by Tom Lane.
2013-01-29 23:06:26 +01:00
|
|
|
{
|
|
|
|
/*
|
|
|
|
* In principle the auxiliary information for this error
|
|
|
|
* should be errdomainconstraint(), but errtablecol()
|
2014-05-06 18:12:18 +02:00
|
|
|
* seems considerably more useful in practice. Since this
|
|
|
|
* code only executes in an ALTER DOMAIN command, the
|
|
|
|
* client should already know which domain is in question,
|
|
|
|
* and which constraint too.
|
|
|
|
*/
|
2003-07-20 23:56:35 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_CHECK_VIOLATION),
|
2003-09-29 02:05:25 +02:00
|
|
|
errmsg("column \"%s\" of table \"%s\" contains values that violate the new constraint",
|
2017-08-20 20:19:07 +02:00
|
|
|
NameStr(attr->attname),
|
|
|
|
RelationGetRelationName(testrel)),
|
|
|
|
errtablecol(testrel, attnum)));
|
|
|
|
}
|
2002-12-06 06:00:34 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
ResetExprContext(econtext);
|
|
|
|
}
|
|
|
|
ExecDropSingleTupleTableSlot(slot);
|
|
|
|
table_endscan(scan);
|
|
|
|
UnregisterSnapshot(snapshot);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2002-12-12 21:35:16 +01:00
|
|
|
/* Hold relation lock till commit (XXX bad for concurrency) */
|
2019-01-21 19:32:19 +01:00
|
|
|
table_close(testrel, NoLock);
|
2002-12-06 06:00:34 +01:00
|
|
|
}
|
|
|
|
|
2002-12-15 17:17:59 +01:00
|
|
|
FreeExecutorState(estate);
|
2002-12-06 06:00:34 +01:00
|
|
|
}
|
2011-11-14 18:08:48 +01:00
|
|
|
|
2002-12-06 06:00:34 +01:00
|
|
|
/*
|
|
|
|
* get_rels_with_domain
|
|
|
|
*
|
|
|
|
* Fetch all relations / attributes which are using the domain
|
2003-01-04 01:46:08 +01:00
|
|
|
*
|
|
|
|
* The result is a list of RelToCheck structs, one for each distinct
|
|
|
|
* relation, each containing one or more attribute numbers that are of
|
|
|
|
* the domain type. We have opened each rel and acquired the specified lock
|
|
|
|
* type on it.
|
|
|
|
*
|
2007-05-11 22:17:15 +02:00
|
|
|
* We support nested domains by including attributes that are of derived
|
|
|
|
* domain types. Current callers do not need to distinguish between attributes
|
|
|
|
* that are of exactly the given domain and those that are of derived domains.
|
|
|
|
*
|
2003-01-04 01:46:08 +01:00
|
|
|
* XXX this is completely broken because there is no way to lock the domain
|
|
|
|
* to prevent columns from being added or dropped while our command runs.
|
|
|
|
* We can partially protect against column drops by locking relations as we
|
|
|
|
* come across them, but there is still a race condition (the window between
|
|
|
|
* seeing a pg_depend entry and acquiring lock on the relation it references).
|
|
|
|
* Also, holding locks on all these relations simultaneously creates a non-
|
|
|
|
* trivial risk of deadlock. We can minimize but not eliminate the deadlock
|
|
|
|
* risk by using the weakest suitable lock (ShareLock for most callers).
|
|
|
|
*
|
2007-05-11 22:17:15 +02:00
|
|
|
* XXX the API for this is not sufficient to support checking domain values
|
2017-08-09 23:03:09 +02:00
|
|
|
* that are inside container types, such as composite types, arrays, or
|
|
|
|
* ranges. Currently we just error out if a container type containing the
|
|
|
|
* target domain is stored anywhere.
|
2002-12-06 06:00:34 +01:00
|
|
|
*
|
|
|
|
* Generally used for retrieving a list of tests when adding
|
|
|
|
* new constraints to a domain.
|
|
|
|
*/
|
2003-01-04 01:46:08 +01:00
|
|
|
static List *
|
|
|
|
get_rels_with_domain(Oid domainOid, LOCKMODE lockmode)
|
2002-12-06 06:00:34 +01:00
|
|
|
{
|
2003-08-04 02:43:34 +02:00
|
|
|
List *result = NIL;
|
2017-08-09 23:03:09 +02:00
|
|
|
char *domainTypeName = format_type_be(domainOid);
|
2003-01-04 01:46:08 +01:00
|
|
|
Relation depRel;
|
|
|
|
ScanKeyData key[2];
|
|
|
|
SysScanDesc depScan;
|
|
|
|
HeapTuple depTup;
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2006-07-31 22:09:10 +02:00
|
|
|
Assert(lockmode != NoLock);
|
|
|
|
|
2017-08-09 23:03:09 +02:00
|
|
|
/* since this function recurses, it could be driven to stack overflow */
|
|
|
|
check_stack_depth();
|
|
|
|
|
2002-12-06 06:00:34 +01:00
|
|
|
/*
|
2005-10-15 04:49:52 +02:00
|
|
|
* We scan pg_depend to find those things that depend on the domain. (We
|
|
|
|
* assume we can ignore refobjsubid for a domain.)
|
2002-12-06 06:00:34 +01:00
|
|
|
*/
|
2019-01-21 19:32:19 +01:00
|
|
|
depRel = table_open(DependRelationId, AccessShareLock);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2003-11-12 22:15:59 +01:00
|
|
|
ScanKeyInit(&key[0],
|
|
|
|
Anum_pg_depend_refclassid,
|
|
|
|
BTEqualStrategyNumber, F_OIDEQ,
|
2005-04-14 03:38:22 +02:00
|
|
|
ObjectIdGetDatum(TypeRelationId));
|
2003-11-12 22:15:59 +01:00
|
|
|
ScanKeyInit(&key[1],
|
|
|
|
Anum_pg_depend_refobjid,
|
|
|
|
BTEqualStrategyNumber, F_OIDEQ,
|
|
|
|
ObjectIdGetDatum(domainOid));
|
2003-01-04 01:46:08 +01:00
|
|
|
|
2005-04-14 22:03:27 +02:00
|
|
|
depScan = systable_beginscan(depRel, DependReferenceIndexId, true,
|
|
|
|
NULL, 2, key);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2003-01-04 01:46:08 +01:00
|
|
|
while (HeapTupleIsValid(depTup = systable_getnext(depScan)))
|
2002-12-06 06:00:34 +01:00
|
|
|
{
|
2003-08-04 02:43:34 +02:00
|
|
|
Form_pg_depend pg_depend = (Form_pg_depend) GETSTRUCT(depTup);
|
2003-01-04 01:46:08 +01:00
|
|
|
RelToCheck *rtc = NULL;
|
2004-05-26 06:41:50 +02:00
|
|
|
ListCell *rellist;
|
2003-08-04 02:43:34 +02:00
|
|
|
Form_pg_attribute pg_att;
|
2003-01-04 01:46:08 +01:00
|
|
|
int ptr;
|
|
|
|
|
2017-08-09 23:03:09 +02:00
|
|
|
/* Check for directly dependent types */
|
2007-05-11 22:17:15 +02:00
|
|
|
if (pg_depend->classid == TypeRelationId)
|
|
|
|
{
|
2017-08-09 23:03:09 +02:00
|
|
|
if (get_typtype(pg_depend->objid) == TYPTYPE_DOMAIN)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* This is a sub-domain, so recursively add dependent columns
|
|
|
|
* to the output list. This is a bit inefficient since we may
|
|
|
|
* fail to combine RelToCheck entries when attributes of the
|
|
|
|
* same rel have different derived domain types, but it's
|
|
|
|
* probably not worth improving.
|
|
|
|
*/
|
|
|
|
result = list_concat(result,
|
|
|
|
get_rels_with_domain(pg_depend->objid,
|
|
|
|
lockmode));
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* Otherwise, it is some container type using the domain, so
|
|
|
|
* fail if there are any columns of this type.
|
|
|
|
*/
|
|
|
|
find_composite_type_dependencies(pg_depend->objid,
|
|
|
|
NULL,
|
|
|
|
domainTypeName);
|
|
|
|
}
|
2007-05-11 22:17:15 +02:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Else, ignore dependees that aren't user columns of relations */
|
2003-01-04 01:46:08 +01:00
|
|
|
/* (we assume system columns are never of domain types) */
|
2005-04-14 03:38:22 +02:00
|
|
|
if (pg_depend->classid != RelationRelationId ||
|
2003-01-04 01:46:08 +01:00
|
|
|
pg_depend->objsubid <= 0)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
/* See if we already have an entry for this relation */
|
|
|
|
foreach(rellist, result)
|
|
|
|
{
|
|
|
|
RelToCheck *rt = (RelToCheck *) lfirst(rellist);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2003-01-04 01:46:08 +01:00
|
|
|
if (RelationGetRelid(rt->rel) == pg_depend->objid)
|
|
|
|
{
|
|
|
|
rtc = rt;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2003-01-04 01:46:08 +01:00
|
|
|
if (rtc == NULL)
|
|
|
|
{
|
|
|
|
/* First attribute found for this relation */
|
|
|
|
Relation rel;
|
|
|
|
|
|
|
|
/* Acquire requested lock on relation */
|
2004-05-05 19:06:56 +02:00
|
|
|
rel = relation_open(pg_depend->objid, lockmode);
|
|
|
|
|
2007-05-11 22:17:15 +02:00
|
|
|
/*
|
|
|
|
* Check to see if rowtype is stored anyplace as a composite-type
|
|
|
|
* column; if so we have to fail, for now anyway.
|
|
|
|
*/
|
|
|
|
if (OidIsValid(rel->rd_rel->reltype))
|
|
|
|
find_composite_type_dependencies(rel->rd_rel->reltype,
|
2011-02-11 14:47:38 +01:00
|
|
|
NULL,
|
2017-08-09 23:03:09 +02:00
|
|
|
domainTypeName);
|
2007-05-11 22:17:15 +02:00
|
|
|
|
2013-07-05 21:25:51 +02:00
|
|
|
/*
|
|
|
|
* Otherwise, we can ignore relations except those with both
|
|
|
|
* storage and user-chosen column types.
|
|
|
|
*
|
|
|
|
* XXX If an index-only scan could satisfy "col::some_domain" from
|
|
|
|
* a suitable expression index, this should also check expression
|
|
|
|
* index columns.
|
|
|
|
*/
|
2013-03-04 01:23:31 +01:00
|
|
|
if (rel->rd_rel->relkind != RELKIND_RELATION &&
|
|
|
|
rel->rd_rel->relkind != RELKIND_MATVIEW)
|
2004-05-05 19:06:56 +02:00
|
|
|
{
|
|
|
|
relation_close(rel, lockmode);
|
|
|
|
continue;
|
|
|
|
}
|
2003-01-04 01:46:08 +01:00
|
|
|
|
|
|
|
/* Build the RelToCheck entry with enough space for all atts */
|
|
|
|
rtc = (RelToCheck *) palloc(sizeof(RelToCheck));
|
|
|
|
rtc->rel = rel;
|
|
|
|
rtc->natts = 0;
|
|
|
|
rtc->atts = (int *) palloc(sizeof(int) * RelationGetNumberOfAttributes(rel));
|
Avoid using lcons and list_delete_first where it's easy to do so.
Formerly, lcons was about the same speed as lappend, but with the new
List implementation, that's not so; with a long List, data movement
imposes an O(N) cost on lcons and list_delete_first, but not lappend.
Hence, invent list_delete_last with semantics parallel to
list_delete_first (but O(1) cost), and change various places to use
lappend and list_delete_last where this can be done without much
violence to the code logic.
There are quite a few places that construct result lists using lcons not
lappend. Some have semantic rationales for that; I added comments about
it to a couple that didn't have them already. In many such places though,
I think the coding is that way only because back in the dark ages lcons
was faster than lappend. Hence, switch to lappend where this can be done
without causing semantic changes.
In ExecInitExprRec(), this results in aggregates and window functions that
are in the same plan node being executed in a different order than before.
Generally, the executions of such functions ought to be independent of
each other, so this shouldn't result in visibly different query results.
But if you push it, as one regression test case does, you can show that
the order is different. The new order seems saner; it's closer to
the order of the functions in the query text. And we never documented
or promised anything about this, anyway.
Also, in gistfinishsplit(), don't bother building a reverse-order list;
it's easy now to iterate backwards through the original list.
It'd be possible to go further towards removing uses of lcons and
list_delete_first, but it'd require more extensive logic changes,
and I'm not convinced it's worth it. Most of the remaining uses
deal with queues that probably never get long enough to be worth
sweating over. (Actually, I doubt that any of the changes in this
patch will have measurable performance effects either. But better
to have good examples than bad ones in the code base.)
Patch by me, thanks to David Rowley and Daniel Gustafsson for review.
Discussion: https://postgr.es/m/21272.1563318411@sss.pgh.pa.us
2019-07-17 17:15:28 +02:00
|
|
|
result = lappend(result, rtc);
|
2003-01-04 01:46:08 +01:00
|
|
|
}
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2003-01-04 01:46:08 +01:00
|
|
|
/*
|
2005-10-15 04:49:52 +02:00
|
|
|
* Confirm column has not been dropped, and is of the expected type.
|
2012-04-24 04:43:09 +02:00
|
|
|
* This defends against an ALTER DROP COLUMN occurring just before we
|
2005-10-15 04:49:52 +02:00
|
|
|
* acquired lock ... but if the whole table were dropped, we'd still
|
|
|
|
* have a problem.
|
2003-01-04 01:46:08 +01:00
|
|
|
*/
|
|
|
|
if (pg_depend->objsubid > RelationGetNumberOfAttributes(rtc->rel))
|
|
|
|
continue;
|
2017-08-20 20:19:07 +02:00
|
|
|
pg_att = TupleDescAttr(rtc->rel->rd_att, pg_depend->objsubid - 1);
|
2003-01-04 01:46:08 +01:00
|
|
|
if (pg_att->attisdropped || pg_att->atttypid != domainOid)
|
|
|
|
continue;
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2003-01-04 01:46:08 +01:00
|
|
|
/*
|
2014-05-06 18:12:18 +02:00
|
|
|
* Okay, add column to result. We store the columns in column-number
|
2005-10-15 04:49:52 +02:00
|
|
|
* order; this is just a hack to improve predictability of regression
|
|
|
|
* test output ...
|
2003-01-04 01:46:08 +01:00
|
|
|
*/
|
|
|
|
Assert(rtc->natts < RelationGetNumberOfAttributes(rtc->rel));
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2003-01-04 01:46:08 +01:00
|
|
|
ptr = rtc->natts++;
|
2003-08-04 02:43:34 +02:00
|
|
|
while (ptr > 0 && rtc->atts[ptr - 1] > pg_depend->objsubid)
|
2002-12-06 06:00:34 +01:00
|
|
|
{
|
2003-08-04 02:43:34 +02:00
|
|
|
rtc->atts[ptr] = rtc->atts[ptr - 1];
|
2003-01-04 01:46:08 +01:00
|
|
|
ptr--;
|
2002-12-06 06:00:34 +01:00
|
|
|
}
|
2003-01-04 01:46:08 +01:00
|
|
|
rtc->atts[ptr] = pg_depend->objsubid;
|
2002-12-06 06:00:34 +01:00
|
|
|
}
|
|
|
|
|
2003-01-04 01:46:08 +01:00
|
|
|
systable_endscan(depScan);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2003-01-04 01:46:08 +01:00
|
|
|
relation_close(depRel, AccessShareLock);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2003-01-04 01:46:08 +01:00
|
|
|
return result;
|
2002-12-06 06:00:34 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
2006-03-14 23:48:25 +01:00
|
|
|
* checkDomainOwner
|
2002-12-06 06:00:34 +01:00
|
|
|
*
|
2006-03-14 23:48:25 +01:00
|
|
|
* Check that the type is actually a domain and that the current user
|
|
|
|
* has permission to do ALTER DOMAIN on it. Throw an error if not.
|
2002-12-06 06:00:34 +01:00
|
|
|
*/
|
2012-04-03 07:11:51 +02:00
|
|
|
void
|
2010-10-25 05:04:37 +02:00
|
|
|
checkDomainOwner(HeapTuple tup)
|
2002-12-06 06:00:34 +01:00
|
|
|
{
|
2003-08-04 02:43:34 +02:00
|
|
|
Form_pg_type typTup = (Form_pg_type) GETSTRUCT(tup);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
|
|
|
/* Check that this is actually a domain */
|
2007-04-02 05:49:42 +02:00
|
|
|
if (typTup->typtype != TYPTYPE_DOMAIN)
|
2003-07-20 23:56:35 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
|
2010-10-25 05:04:37 +02:00
|
|
|
errmsg("%s is not a domain",
|
Remove WITH OIDS support, change oid catalog column visibility.
Previously tables declared WITH OIDS, including a significant fraction
of the catalog tables, stored the oid column not as a normal column,
but as part of the tuple header.
This special column was not shown by default, which was somewhat odd,
as it's often (consider e.g. pg_class.oid) one of the more important
parts of a row. Neither pg_dump nor COPY included the contents of the
oid column by default.
The fact that the oid column was not an ordinary column necessitated a
significant amount of special-case code to support oid columns. That
was already painful as-is, but the upcoming work aiming to make
table storage pluggable would have required expanding and duplicating
that "specialness" significantly.
WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0).
Remove it.
Removing includes:
- CREATE TABLE and ALTER TABLE syntax for declaring the table to be
WITH OIDS has been removed (WITH (oids[ = true]) will error out)
- pg_dump does not support dumping tables declared WITH OIDS and will
issue a warning when dumping one (and ignore the oid column).
- restoring a pg_dump archive with pg_restore will warn when
restoring a table with oid contents (and ignore the oid column)
- COPY will refuse to load binary dump that includes oids.
- pg_upgrade will error out when encountering tables declared WITH
OIDS, they have to be altered to remove the oid column first.
- Functionality to access the oid of the last inserted row (like
plpgsql's RESULT_OID, spi's SPI_lastoid, ...) has been removed.
The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false)
for CREATE TABLE) is still supported. While that requires a bit of
support code, it seems unnecessary to break applications / dumps that
do not use oids, and are explicit about not using them.
The biggest user of WITH OID columns was postgres' catalog. This
commit changes all 'magic' oid columns to be columns that are normally
declared and stored. To reduce unnecessary query breakage all the
newly added columns are still named 'oid', even if a table's column
naming scheme would indicate 'reloid' or such. This obviously
requires adapting a lot code, mostly replacing oid access via
HeapTupleGetOid() with access to the underlying Form_pg_*->oid column.
The bootstrap process now assigns oids for all oid columns in
genbki.pl that do not have an explicit value (starting at the largest
oid previously used); only oids assigned later will be above
FirstBootstrapObjectId. As the oid column is now a normal column, the
special bootstrap syntax for oids has been removed.
Oids are not automatically assigned during insertion anymore, all
backend code explicitly assigns oids with GetNewOidWithIndex(). For
the rare case that insertions into the catalog via SQL are called for
the new pg_nextoid() function can be used (which only works on catalog
tables).
The fact that oid columns on system tables are now normal columns
means that they will be included in the set of columns expanded
by * (i.e. SELECT * FROM pg_class will now include the table's oid,
previously it did not). It wouldn't technically be hard to hide the oid
column by default, but that'd mean confusing behavior would either
have to be carried forward forever, or it'd cause breakage down the
line.
While it's not unlikely that further adjustments are needed, the
scope/invasiveness of the patch makes it worthwhile to merge this
now. It's painful to maintain externally, too complicated to commit
after the code freeze, and a dependency of a number of other
patches.
Catversion bump, for obvious reasons.
Author: Andres Freund, with contributions by John Naylor
Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de
2018-11-21 00:36:57 +01:00
|
|
|
format_type_be(typTup->oid))));
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2003-01-04 01:46:08 +01:00
|
|
|
/* Permission check: must own type */
|
2018-11-21 00:36:57 +01:00
|
|
|
if (!pg_type_ownercheck(typTup->oid, GetUserId()))
|
|
|
|
aclcheck_error_type(ACLCHECK_NOT_OWNER, typTup->oid);
|
2003-01-04 01:46:08 +01:00
|
|
|
}
|
2002-12-06 06:00:34 +01:00
|
|
|
|
|
|
|
/*
|
2002-12-12 21:35:16 +01:00
|
|
|
* domainAddConstraint - code shared between CREATE and ALTER DOMAIN
|
2002-12-06 06:00:34 +01:00
|
|
|
*/
|
2002-12-12 21:35:16 +01:00
|
|
|
static char *
|
2002-12-06 06:00:34 +01:00
|
|
|
domainAddConstraint(Oid domainOid, Oid domainNamespace, Oid baseTypeOid,
|
2002-12-12 21:35:16 +01:00
|
|
|
int typMod, Constraint *constr,
|
2017-10-31 15:34:31 +01:00
|
|
|
const char *domainName, ObjectAddress *constrAddr)
|
2002-12-06 06:00:34 +01:00
|
|
|
{
|
|
|
|
Node *expr;
|
|
|
|
char *ccbin;
|
|
|
|
ParseState *pstate;
|
2003-08-04 02:43:34 +02:00
|
|
|
CoerceToDomainValue *domVal;
|
Change many routines to return ObjectAddress rather than OID
The changed routines are mostly those that can be directly called by
ProcessUtilitySlow; the intention is to make the affected object
information more precise, in support of future event trigger changes.
Originally it was envisioned that the OID of the affected object would
be enough, and in most cases that is correct, but upon actually
implementing the event trigger changes it turned out that ObjectAddress
is more widely useful.
Additionally, some command execution routines grew an output argument
that's an object address which provides further info about the executed
command. To wit:
* for ALTER DOMAIN / ADD CONSTRAINT, it corresponds to the address of
the new constraint
* for ALTER OBJECT / SET SCHEMA, it corresponds to the address of the
schema that originally contained the object.
* for ALTER EXTENSION {ADD, DROP} OBJECT, it corresponds to the address
of the object added to or dropped from the extension.
There's no user-visible change in this commit, and no functional change
either.
Discussion: 20150218213255.GC6717@tamriel.snowman.net
Reviewed-By: Stephen Frost, Andres Freund
2015-03-03 18:10:50 +01:00
|
|
|
Oid ccoid;
|
2002-12-06 06:00:34 +01:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Assign or validate constraint name
|
|
|
|
*/
|
2009-07-30 04:45:38 +02:00
|
|
|
if (constr->conname)
|
2002-12-06 06:00:34 +01:00
|
|
|
{
|
|
|
|
if (ConstraintNameIsUsed(CONSTRAINT_DOMAIN,
|
|
|
|
domainOid,
|
2009-07-30 04:45:38 +02:00
|
|
|
constr->conname))
|
2003-07-20 23:56:35 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_DUPLICATE_OBJECT),
|
Phase 3 of pgindent updates.
Don't move parenthesized lines to the left, even if that means they
flow past the right margin.
By default, BSD indent lines up statement continuation lines that are
within parentheses so that they start just to the right of the preceding
left parenthesis. However, traditionally, if that resulted in the
continuation line extending to the right of the desired right margin,
then indent would push it left just far enough to not overrun the margin,
if it could do so without making the continuation line start to the left of
the current statement indent. That makes for a weird mix of indentations
unless one has been completely rigid about never violating the 80-column
limit.
This behavior has been pretty universally panned by Postgres developers.
Hence, disable it with indent's new -lpl switch, so that parenthesized
lines are always lined up with the preceding left paren.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 21:35:54 +02:00
|
|
|
errmsg("constraint \"%s\" for domain \"%s\" already exists",
|
|
|
|
constr->conname, domainName)));
|
2002-12-06 06:00:34 +01:00
|
|
|
}
|
|
|
|
else
|
2009-07-30 04:45:38 +02:00
|
|
|
constr->conname = ChooseConstraintName(domainName,
|
|
|
|
NULL,
|
|
|
|
"check",
|
|
|
|
domainNamespace,
|
|
|
|
NIL);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
|
|
|
/*
|
2002-12-12 21:35:16 +01:00
|
|
|
* Convert the A_EXPR in raw_expr into an EXPR
|
2002-12-06 06:00:34 +01:00
|
|
|
*/
|
|
|
|
pstate = make_parsestate(NULL);
|
|
|
|
|
|
|
|
/*
|
2005-10-15 04:49:52 +02:00
|
|
|
* Set up a CoerceToDomainValue to represent the occurrence of VALUE in
|
2014-05-06 18:12:18 +02:00
|
|
|
* the expression. Note that it will appear to have the type of the base
|
2005-10-15 04:49:52 +02:00
|
|
|
* type, not the domain. This seems correct since within the check
|
|
|
|
* expression, we should not assume the input value can be considered a
|
|
|
|
* member of the domain.
|
2002-12-06 06:00:34 +01:00
|
|
|
*/
|
2003-02-03 22:15:45 +01:00
|
|
|
domVal = makeNode(CoerceToDomainValue);
|
2002-12-06 06:00:34 +01:00
|
|
|
domVal->typeId = baseTypeOid;
|
|
|
|
domVal->typeMod = typMod;
|
2011-03-20 01:29:08 +01:00
|
|
|
domVal->collation = get_typcollation(baseTypeOid);
|
2008-08-29 01:09:48 +02:00
|
|
|
domVal->location = -1; /* will be set when/if used */
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2017-01-07 22:02:16 +01:00
|
|
|
pstate->p_pre_columnref_hook = replace_domain_constraint_value;
|
|
|
|
pstate->p_ref_hook_state = (void *) domVal;
|
2002-12-06 06:00:34 +01:00
|
|
|
|
Centralize the logic for detecting misplaced aggregates, window funcs, etc.
Formerly we relied on checking after-the-fact to see if an expression
contained aggregates, window functions, or sub-selects when it shouldn't.
This is grotty, easily forgotten (indeed, we had forgotten to teach
DefineIndex about rejecting window functions), and none too efficient
since it requires extra traversals of the parse tree. To improve matters,
define an enum type that classifies all SQL sub-expressions, store it in
ParseState to show what kind of expression we are currently parsing, and
make transformAggregateCall, transformWindowFuncCall, and transformSubLink
check the expression type and throw error if the type indicates the
construct is disallowed. This allows removal of a large number of ad-hoc
checks scattered around the code base. The enum type is sufficiently
fine-grained that we can still produce error messages of at least the
same specificity as before.
Bringing these error checks together revealed that we'd been none too
consistent about phrasing of the error messages, so standardize the wording
a bit.
Also, rewrite checking of aggregate arguments so that it requires only one
traversal of the arguments, rather than up to three as before.
In passing, clean up some more comments left over from add_missing_from
support, and annotate some tests that I think are dead code now that that's
gone. (I didn't risk actually removing said dead code, though.)
2012-08-10 17:35:33 +02:00
|
|
|
expr = transformExpr(pstate, constr->raw_expr, EXPR_KIND_DOMAIN_CHECK);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Make sure it yields a boolean result.
|
|
|
|
*/
|
2003-04-30 00:13:11 +02:00
|
|
|
expr = coerce_to_boolean(pstate, expr, "CHECK");
|
2002-12-06 06:00:34 +01:00
|
|
|
|
2011-03-20 01:29:08 +01:00
|
|
|
/*
|
|
|
|
* Fix up collation information.
|
|
|
|
*/
|
|
|
|
assign_expr_collations(pstate, expr);
|
|
|
|
|
2002-12-06 06:00:34 +01:00
|
|
|
/*
|
2012-08-10 17:35:33 +02:00
|
|
|
* Domains don't allow variables (this is probably dead code now that
|
|
|
|
* add_missing_from is history, but let's be sure).
|
2002-12-06 06:00:34 +01:00
|
|
|
*/
|
2012-08-10 17:35:33 +02:00
|
|
|
if (list_length(pstate->p_rtable) != 0 ||
|
|
|
|
contain_var_clause(expr))
|
2003-07-20 23:56:35 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INVALID_COLUMN_REFERENCE),
|
2017-06-21 21:35:54 +02:00
|
|
|
errmsg("cannot use table references in domain check constraint")));
|
2002-12-06 06:00:34 +01:00
|
|
|
|
|
|
|
/*
|
2002-12-12 16:49:42 +01:00
|
|
|
* Convert to string form for storage.
|
2002-12-06 06:00:34 +01:00
|
|
|
*/
|
|
|
|
ccbin = nodeToString(expr);
|
|
|
|
|
2002-12-12 21:35:16 +01:00
|
|
|
/*
|
|
|
|
* Store the constraint in pg_constraint
|
|
|
|
*/
|
2015-03-03 18:10:50 +01:00
|
|
|
ccoid =
|
|
|
|
CreateConstraintEntry(constr->conname, /* Constraint Name */
|
|
|
|
domainNamespace, /* namespace */
|
|
|
|
CONSTRAINT_CHECK, /* Constraint Type */
|
|
|
|
false, /* Is Deferrable */
|
|
|
|
false, /* Is Deferred */
|
|
|
|
!constr->skip_validation, /* Is Validated */
|
2018-03-23 14:48:22 +01:00
|
|
|
InvalidOid, /* no parent constraint */
|
Phase 2 of pgindent updates.
Change pg_bsd_indent to follow upstream rules for placement of comments
to the right of code, and remove pgindent hack that caused comments
following #endif to not obey the general rule.
Commit e3860ffa4dd0dad0dd9eea4be9cc1412373a8c89 wasn't actually using
the published version of pg_bsd_indent, but a hacked-up version that
tried to minimize the amount of movement of comments to the right of
code. The situation of interest is where such a comment has to be
moved to the right of its default placement at column 33 because there's
code there. BSD indent has always moved right in units of tab stops
in such cases --- but in the previous incarnation, indent was working
in 8-space tab stops, while now it knows we use 4-space tabs. So the
net result is that in about half the cases, such comments are placed
one tab stop left of before. This is better all around: it leaves
more room on the line for comment text, and it means that in such
cases the comment uniformly starts at the next 4-space tab stop after
the code, rather than sometimes one and sometimes two tabs after.
Also, ensure that comments following #endif are indented the same
as comments following other preprocessor commands such as #else.
That inconsistency turns out to have been self-inflicted damage
from a poorly-thought-through post-indent "fixup" in pgindent.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 21:18:54 +02:00
|
|
|
InvalidOid, /* not a relation constraint */
|
2015-03-03 18:10:50 +01:00
|
|
|
NULL,
|
|
|
|
0,
|
2018-04-07 22:00:39 +02:00
|
|
|
0,
|
2017-06-21 21:18:54 +02:00
|
|
|
domainOid, /* domain constraint */
|
|
|
|
InvalidOid, /* no associated index */
|
|
|
|
InvalidOid, /* Foreign key fields */
|
2015-03-03 18:10:50 +01:00
|
|
|
NULL,
|
|
|
|
NULL,
|
|
|
|
NULL,
|
|
|
|
NULL,
|
|
|
|
0,
|
|
|
|
' ',
|
|
|
|
' ',
|
|
|
|
' ',
|
2017-06-21 21:18:54 +02:00
|
|
|
NULL, /* not an exclusion constraint */
|
|
|
|
expr, /* Tree form of check constraint */
|
2015-03-03 18:10:50 +01:00
|
|
|
ccbin, /* Binary form of check constraint */
|
2017-06-21 21:18:54 +02:00
|
|
|
true, /* is local */
|
2015-03-03 18:10:50 +01:00
|
|
|
0, /* inhcount */
|
|
|
|
false, /* connoinherit */
|
|
|
|
false); /* is_internal */
|
|
|
|
if (constrAddr)
|
|
|
|
ObjectAddressSet(*constrAddr, ConstraintRelationId, ccoid);
|
2002-12-06 06:00:34 +01:00
|
|
|
|
|
|
|
/*
|
2005-10-15 04:49:52 +02:00
|
|
|
* Return the compiled constraint expression so the calling routine can
|
|
|
|
* perform any additional required tests.
|
2002-12-06 06:00:34 +01:00
|
|
|
*/
|
|
|
|
return ccbin;
|
|
|
|
}
|
2003-01-06 01:31:45 +01:00
|
|
|
|
2017-01-07 22:02:16 +01:00
|
|
|
/* Parser pre_columnref_hook for domain CHECK constraint parsing */
|
|
|
|
static Node *
|
|
|
|
replace_domain_constraint_value(ParseState *pstate, ColumnRef *cref)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* Check for a reference to "value", and if that's what it is, replace
|
|
|
|
* with a CoerceToDomainValue as prepared for us by domainAddConstraint.
|
|
|
|
* (We handle VALUE as a name, not a keyword, to avoid breaking a lot of
|
|
|
|
* applications that have used VALUE as a column name in the past.)
|
|
|
|
*/
|
|
|
|
if (list_length(cref->fields) == 1)
|
|
|
|
{
|
|
|
|
Node *field1 = (Node *) linitial(cref->fields);
|
|
|
|
char *colname;
|
|
|
|
|
|
|
|
Assert(IsA(field1, String));
|
|
|
|
colname = strVal(field1);
|
|
|
|
if (strcmp(colname, "value") == 0)
|
|
|
|
{
|
|
|
|
CoerceToDomainValue *domVal = copyObject(pstate->p_ref_hook_state);
|
|
|
|
|
|
|
|
/* Propagate location knowledge, if any */
|
|
|
|
domVal->location = cref->location;
|
|
|
|
return (Node *) domVal;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
2008-03-19 19:38:30 +01:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Execute ALTER TYPE RENAME
|
|
|
|
*/
|
2015-03-03 18:10:50 +01:00
|
|
|
ObjectAddress
|
2011-12-22 21:43:56 +01:00
|
|
|
RenameType(RenameStmt *stmt)
|
2008-03-19 19:38:30 +01:00
|
|
|
{
|
Remove objname/objargs split for referring to objects
In simpler times, it might have worked to refer to all kinds of objects
by a list of name components and an optional argument list. But this
doesn't work for all objects, which has resulted in a collection of
hacks to place various other nodes types into these fields, which have
to be unpacked at the other end. This makes it also weird to represent
lists of such things in the grammar, because they would have to be lists
of singleton lists, to make the unpacking work consistently. The other
problem is that keeping separate name and args fields makes it awkward
to deal with lists of functions.
Change that by dropping the objargs field and have objname, renamed to
object, be a generic Node, which can then be flexibly assigned and
managed using the normal Node mechanisms. In many cases it will still
be a List of names, in some cases it will be a string Value, for types
it will be the existing Typename, for functions it will now use the
existing ObjectWithArgs node type. Some of the more obscure object
types still use somewhat arbitrary nested lists.
Reviewed-by: Jim Nasby <Jim.Nasby@BlueTreble.com>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2016-11-12 18:00:00 +01:00
|
|
|
List *names = castNode(List, stmt->object);
|
2011-12-22 21:43:56 +01:00
|
|
|
const char *newTypeName = stmt->newname;
|
2008-03-19 19:38:30 +01:00
|
|
|
TypeName *typename;
|
|
|
|
Oid typeOid;
|
|
|
|
Relation rel;
|
|
|
|
HeapTuple tup;
|
|
|
|
Form_pg_type typTup;
|
2015-03-03 18:10:50 +01:00
|
|
|
ObjectAddress address;
|
2008-03-19 19:38:30 +01:00
|
|
|
|
|
|
|
/* Make a TypeName so we can use standard type lookup machinery */
|
|
|
|
typename = makeTypeNameFromNameList(names);
|
2010-10-25 20:40:46 +02:00
|
|
|
typeOid = typenameTypeId(NULL, typename);
|
2008-03-19 19:38:30 +01:00
|
|
|
|
|
|
|
/* Look up the type in the type table */
|
2019-01-21 19:32:19 +01:00
|
|
|
rel = table_open(TypeRelationId, RowExclusiveLock);
|
2008-03-19 19:38:30 +01:00
|
|
|
|
2010-02-14 19:42:19 +01:00
|
|
|
tup = SearchSysCacheCopy1(TYPEOID, ObjectIdGetDatum(typeOid));
|
2008-03-19 19:38:30 +01:00
|
|
|
if (!HeapTupleIsValid(tup))
|
|
|
|
elog(ERROR, "cache lookup failed for type %u", typeOid);
|
|
|
|
typTup = (Form_pg_type) GETSTRUCT(tup);
|
|
|
|
|
|
|
|
/* check permissions on type */
|
|
|
|
if (!pg_type_ownercheck(typeOid, GetUserId()))
|
2012-06-15 21:55:03 +02:00
|
|
|
aclcheck_error_type(ACLCHECK_NOT_OWNER, typeOid);
|
2008-03-19 19:38:30 +01:00
|
|
|
|
2011-12-22 21:43:56 +01:00
|
|
|
/* ALTER DOMAIN used on a non-domain? */
|
|
|
|
if (stmt->renameType == OBJECT_DOMAIN && typTup->typtype != TYPTYPE_DOMAIN)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
|
2017-05-30 03:48:26 +02:00
|
|
|
errmsg("%s is not a domain",
|
2011-12-22 21:43:56 +01:00
|
|
|
format_type_be(typeOid))));
|
|
|
|
|
2008-03-19 19:38:30 +01:00
|
|
|
/*
|
|
|
|
* If it's a composite type, we need to check that it really is a
|
2009-06-11 16:49:15 +02:00
|
|
|
* free-standing composite type, and not a table's rowtype. We want people
|
|
|
|
* to use ALTER TABLE not ALTER TYPE for that case.
|
2008-03-19 19:38:30 +01:00
|
|
|
*/
|
|
|
|
if (typTup->typtype == TYPTYPE_COMPOSITE &&
|
|
|
|
get_rel_relkind(typTup->typrelid) != RELKIND_COMPOSITE_TYPE)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
|
|
|
|
errmsg("%s is a table's row type",
|
|
|
|
format_type_be(typeOid)),
|
|
|
|
errhint("Use ALTER TABLE instead.")));
|
|
|
|
|
|
|
|
/* don't allow direct alteration of array types, either */
|
Support subscripting of arbitrary types, not only arrays.
This patch generalizes the subscripting infrastructure so that any
data type can be subscripted, if it provides a handler function to
define what that means. Traditional variable-length (varlena) arrays
all use array_subscript_handler(), while the existing fixed-length
types that support subscripting use raw_array_subscript_handler().
It's expected that other types that want to use subscripting notation
will define their own handlers. (This patch provides no such new
features, though; it only lays the foundation for them.)
To do this, move the parser's semantic processing of subscripts
(including coercion to whatever data type is required) into a
method callback supplied by the handler. On the execution side,
replace the ExecEvalSubscriptingRef* layer of functions with direct
calls to callback-supplied execution routines. (Thus, essentially
no new run-time overhead should be caused by this patch. Indeed,
there is room to remove some overhead by supplying specialized
execution routines. This patch does a little bit in that line,
but more could be done.)
Additional work is required here and there to remove formerly
hard-wired assumptions about the result type, collation, etc
of a SubscriptingRef expression node; and to remove assumptions
that the subscript values must be integers.
One useful side-effect of this is that we now have a less squishy
mechanism for identifying whether a data type is a "true" array:
instead of wiring in weird rules about typlen, we can look to see
if pg_type.typsubscript == F_ARRAY_SUBSCRIPT_HANDLER. For this
to be bulletproof, we have to forbid user-defined types from using
that handler directly; but there seems no good reason for them to
do so.
This patch also removes assumptions that the number of subscripts
is limited to MAXDIM (6), or indeed has any hard-wired limit.
That limit still applies to types handled by array_subscript_handler
or raw_array_subscript_handler, but to discourage other dependencies
on this constant, I've moved it from c.h to utils/array.h.
Dmitry Dolgov, reviewed at various times by Tom Lane, Arthur Zakirov,
Peter Eisentraut, Pavel Stehule
Discussion: https://postgr.es/m/CA+q6zcVDuGBv=M0FqBYX8DPebS3F_0KQ6OVFobGJPM507_SZ_w@mail.gmail.com
Discussion: https://postgr.es/m/CA+q6zcVovR+XY4mfk-7oNk-rF91gH0PebnNfuUjuuDsyHjOcVA@mail.gmail.com
2020-12-09 18:40:37 +01:00
|
|
|
if (IsTrueArrayType(typTup))
|
2008-03-19 19:38:30 +01:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
|
|
|
|
errmsg("cannot alter array type %s",
|
|
|
|
format_type_be(typeOid)),
|
|
|
|
errhint("You can alter type %s, which will alter the array type as well.",
|
|
|
|
format_type_be(typTup->typelem))));
|
|
|
|
|
2009-06-11 16:49:15 +02:00
|
|
|
/*
|
2008-03-19 19:38:30 +01:00
|
|
|
* If type is composite we need to rename associated pg_class entry too.
|
|
|
|
* RenameRelationInternal will call RenameTypeInternal automatically.
|
|
|
|
*/
|
|
|
|
if (typTup->typtype == TYPTYPE_COMPOSITE)
|
2018-10-25 09:33:17 +02:00
|
|
|
RenameRelationInternal(typTup->typrelid, newTypeName, false, false);
|
2008-03-19 19:38:30 +01:00
|
|
|
else
|
|
|
|
RenameTypeInternal(typeOid, newTypeName,
|
|
|
|
typTup->typnamespace);
|
|
|
|
|
2015-03-03 18:10:50 +01:00
|
|
|
ObjectAddressSet(address, TypeRelationId, typeOid);
|
2008-03-19 19:38:30 +01:00
|
|
|
/* Clean up */
|
2019-01-21 19:32:19 +01:00
|
|
|
table_close(rel, RowExclusiveLock);
|
2012-12-24 00:25:03 +01:00
|
|
|
|
2015-03-03 18:10:50 +01:00
|
|
|
return address;
|
2008-03-19 19:38:30 +01:00
|
|
|
}
|
|
|
|
|
2003-01-06 01:31:45 +01:00
|
|
|
/*
|
2004-06-25 23:55:59 +02:00
|
|
|
* Change the owner of a type.
|
2003-01-06 01:31:45 +01:00
|
|
|
*/
|
2015-03-03 18:10:50 +01:00
|
|
|
ObjectAddress
|
2012-01-27 20:20:34 +01:00
|
|
|
AlterTypeOwner(List *names, Oid newOwnerId, ObjectType objecttype)
|
2003-01-06 01:31:45 +01:00
|
|
|
{
|
|
|
|
TypeName *typename;
|
|
|
|
Oid typeOid;
|
|
|
|
Relation rel;
|
|
|
|
HeapTuple tup;
|
2007-11-11 20:22:49 +01:00
|
|
|
HeapTuple newtup;
|
2003-08-04 02:43:34 +02:00
|
|
|
Form_pg_type typTup;
|
2005-07-14 23:46:30 +02:00
|
|
|
AclResult aclresult;
|
2015-03-03 18:10:50 +01:00
|
|
|
ObjectAddress address;
|
2003-01-06 01:31:45 +01:00
|
|
|
|
2019-01-21 19:32:19 +01:00
|
|
|
rel = table_open(TypeRelationId, RowExclusiveLock);
|
2007-11-11 20:22:49 +01:00
|
|
|
|
2003-01-06 01:31:45 +01:00
|
|
|
/* Make a TypeName so we can use standard type lookup machinery */
|
2006-03-14 23:48:25 +01:00
|
|
|
typename = makeTypeNameFromNameList(names);
|
2003-01-06 01:31:45 +01:00
|
|
|
|
2006-03-14 23:48:25 +01:00
|
|
|
/* Use LookupTypeName here so that shell types can be processed */
|
2014-01-23 18:40:29 +01:00
|
|
|
tup = LookupTypeName(NULL, typename, NULL, false);
|
2007-11-11 20:22:49 +01:00
|
|
|
if (tup == NULL)
|
2003-07-20 23:56:35 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_UNDEFINED_OBJECT),
|
|
|
|
errmsg("type \"%s\" does not exist",
|
|
|
|
TypeNameToString(typename))));
|
2007-11-15 22:14:46 +01:00
|
|
|
typeOid = typeTypeId(tup);
|
2003-01-06 01:31:45 +01:00
|
|
|
|
2007-11-11 20:22:49 +01:00
|
|
|
/* Copy the syscache entry so we can scribble on it below */
|
|
|
|
newtup = heap_copytuple(tup);
|
|
|
|
ReleaseSysCache(tup);
|
|
|
|
tup = newtup;
|
2003-01-06 01:31:45 +01:00
|
|
|
typTup = (Form_pg_type) GETSTRUCT(tup);
|
|
|
|
|
2012-01-27 20:20:34 +01:00
|
|
|
/* Don't allow ALTER DOMAIN on a non-domain type */
|
|
|
|
if (objecttype == OBJECT_DOMAIN && typTup->typtype != TYPTYPE_DOMAIN)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
|
|
|
|
errmsg("%s is not a domain",
|
|
|
|
format_type_be(typeOid))));
|
|
|
|
|
2004-06-25 23:55:59 +02:00
|
|
|
/*
|
2004-08-29 07:07:03 +02:00
|
|
|
* If it's a composite type, we need to check that it really is a
|
2007-11-15 22:14:46 +01:00
|
|
|
* free-standing composite type, and not a table's rowtype. We want people
|
|
|
|
* to use ALTER TABLE not ALTER TYPE for that case.
|
2004-06-25 23:55:59 +02:00
|
|
|
*/
|
2007-04-02 05:49:42 +02:00
|
|
|
if (typTup->typtype == TYPTYPE_COMPOSITE &&
|
2005-08-04 03:09:29 +02:00
|
|
|
get_rel_relkind(typTup->typrelid) != RELKIND_COMPOSITE_TYPE)
|
2003-07-20 23:56:35 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
|
2007-09-29 19:18:58 +02:00
|
|
|
errmsg("%s is a table's row type",
|
|
|
|
format_type_be(typeOid)),
|
|
|
|
errhint("Use ALTER TABLE instead.")));
|
2003-01-06 01:31:45 +01:00
|
|
|
|
Support arrays of composite types, including the rowtypes of regular tables
and views (but not system catalogs, nor sequences or toast tables). Get rid
of the hardwired convention that a type's array type is named exactly "_type",
instead using a new column pg_type.typarray to provide the linkage. (It still
will be named "_type", though, except in odd corner cases such as
maximum-length type names.)
Along the way, make tracking of owner and schema dependencies for types more
uniform: a type directly created by the user has these dependencies, while a
table rowtype or auto-generated array type does not have them, but depends on
its parent object instead.
David Fetter, Andrew Dunstan, Tom Lane
2007-05-11 19:57:14 +02:00
|
|
|
/* don't allow direct alteration of array types, either */
|
2020-12-09 18:40:37 +01:00
|
|
|
if (IsTrueArrayType(typTup))
|
2007-05-11 19:57:14 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
|
|
|
|
errmsg("cannot alter array type %s",
|
|
|
|
format_type_be(typeOid)),
|
|
|
|
errhint("You can alter type %s, which will alter the array type as well.",
|
|
|
|
format_type_be(typTup->typelem))));
|
|
|
|
|
2004-08-29 07:07:03 +02:00
|
|
|
/*
|
2004-06-25 23:55:59 +02:00
|
|
|
* If the new owner is the same as the existing owner, consider the
|
|
|
|
* command to have succeeded. This is for dump restoration purposes.
|
|
|
|
*/
|
2005-06-28 07:09:14 +02:00
|
|
|
if (typTup->typowner != newOwnerId)
|
2004-06-25 23:55:59 +02:00
|
|
|
{
|
2005-08-22 19:38:20 +02:00
|
|
|
/* Superusers can always do it */
|
|
|
|
if (!superuser())
|
|
|
|
{
|
|
|
|
/* Otherwise, must be owner of the existing object */
|
Remove WITH OIDS support, change oid catalog column visibility.
Previously tables declared WITH OIDS, including a significant fraction
of the catalog tables, stored the oid column not as a normal column,
but as part of the tuple header.
This special column was not shown by default, which was somewhat odd,
as it's often (consider e.g. pg_class.oid) one of the more important
parts of a row. Neither pg_dump nor COPY included the contents of the
oid column by default.
The fact that the oid column was not an ordinary column necessitated a
significant amount of special case code to support oid columns. That
already was painful for the existing, but upcoming work aiming to make
table storage pluggable, would have required expanding and duplicating
that "specialness" significantly.
WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0).
Remove it.
Removing includes:
- CREATE TABLE and ALTER TABLE syntax for declaring the table to be
WITH OIDS has been removed (WITH (oids[ = true]) will error out)
- pg_dump does not support dumping tables declared WITH OIDS and will
issue a warning when dumping one (and ignore the oid column).
- restoring a pg_dump archive with pg_restore will warn when
restoring a table with oid contents (and ignore the oid column)
- COPY will refuse to load binary dump that includes oids.
- pg_upgrade will error out when encountering tables declared WITH
OIDS, they have to be altered to remove the oid column first.
- Functionality to access the oid of the last inserted row (like
plpgsql's RESULT_OID, spi's SPI_lastoid, ...) has been removed.
The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false)
for CREATE TABLE) is still supported. While that requires a bit of
support code, it seems unnecessary to break applications / dumps that
do not use oids, and are explicit about not using them.
The biggest user of WITH OID columns was postgres' catalog. This
commit changes all 'magic' oid columns to be columns that are normally
declared and stored. To reduce unnecessary query breakage all the
newly added columns are still named 'oid', even if a table's column
naming scheme would indicate 'reloid' or such. This obviously
requires adapting a lot code, mostly replacing oid access via
HeapTupleGetOid() with access to the underlying Form_pg_*->oid column.
The bootstrap process now assigns oids for all oid columns in
genbki.pl that do not have an explicit value (starting at the largest
oid previously used), only oids assigned later by oids will be above
FirstBootstrapObjectId. As the oid column now is a normal column the
special bootstrap syntax for oids has been removed.
Oids are not automatically assigned during insertion anymore, all
backend code explicitly assigns oids with GetNewOidWithIndex(). For
the rare case that insertions into the catalog via SQL are called for
the new pg_nextoid() function can be used (which only works on catalog
tables).
The fact that oid columns on system tables are now normal columns
means that they will be included in the set of columns expanded
by * (i.e. SELECT * FROM pg_class will now include the table's oid,
previously it did not). It'd not technically be hard to hide oid
column by default, but that'd mean confusing behavior would either
have to be carried forward forever, or it'd cause breakage down the
line.
While it's not unlikely that further adjustments are needed, the
scope/invasiveness of the patch makes it worthwhile to merge this
now. It's painful to maintain externally, too complicated to commit
after the code freeze, and a dependency of a number of other
patches.
Catversion bump, for obvious reasons.
Author: Andres Freund, with contributions by John Naylor
Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de
2018-11-21 00:36:57 +01:00
|
|
|
if (!pg_type_ownercheck(typTup->oid, GetUserId()))
|
|
|
|
aclcheck_error_type(ACLCHECK_NOT_OWNER, typTup->oid);
|
2005-08-22 19:38:20 +02:00
|
|
|
|
|
|
|
/* Must be able to become new owner */
|
|
|
|
check_is_member_of_role(GetUserId(), newOwnerId);
|
|
|
|
|
|
|
|
/* New owner must have CREATE privilege on namespace */
|
|
|
|
aclresult = pg_namespace_aclcheck(typTup->typnamespace,
|
|
|
|
newOwnerId,
|
|
|
|
ACL_CREATE);
|
|
|
|
if (aclresult != ACLCHECK_OK)
|
2017-12-02 15:26:34 +01:00
|
|
|
aclcheck_error(aclresult, OBJECT_SCHEMA,
|
2005-08-22 19:38:20 +02:00
|
|
|
get_namespace_name(typTup->typnamespace));
|
|
|
|
}
|
2004-06-25 23:55:59 +02:00
|
|
|
|
Rework internals of changing a type's ownership
This is necessary so that REASSIGN OWNED does the right thing with
composite types, to wit, that it also alters ownership of the type's
pg_class entry -- previously, the pg_class entry remained owned by the
original user, which caused later other failures such as the new owner's
inability to use ALTER TYPE to rename an attribute of the affected
composite. Also, if the original owner is later dropped, the pg_class
entry becomes owned by a non-existent user, which is bogus.
To fix, create a new routine AlterTypeOwner_oid which knows whether to
pass the request to ATExecChangeOwner or deal with it directly, and use
that in shdepReassignOwner rather than calling AlterTypeOwnerInternal
directly. AlterTypeOwnerInternal is now simpler in that it only
modifies the pg_type entry and recurses to handle a possible array type;
higher-level tasks are handled by either AlterTypeOwner directly or
AlterTypeOwner_oid.
I took the opportunity to add a few more objects to the test rig for
REASSIGN OWNED, so that more cases are exercised. Additional ones could
be added for superuser-only-ownable objects (such as FDWs and event
triggers) but I didn't want to push my luck by adding a new superuser to
the tests on a backpatchable bug fix.
Per bug #13666 reported by Chris Pacejo.
Backpatch to 9.5.
(I would back-patch this all the way back, except that it doesn't apply
cleanly in 9.4 and earlier because 59367fdf9 wasn't backpatched. If we
decide that we need this in earlier branches too, we should backpatch
both.)
2015-12-17 18:25:41 +01:00

		AlterTypeOwner_oid(typeOid, newOwnerId, true);
	}
Change many routines to return ObjectAddress rather than OID
The changed routines are mostly those that can be directly called by
ProcessUtilitySlow; the intention is to make the affected object
information more precise, in support of future event trigger changes.
Originally it was envisioned that the OID of the affected object would
be enough, and in most cases that is correct, but upon actually
implementing the event trigger changes it turned out that ObjectAddress
is more widely useful.
Additionally, some command execution routines grew an output argument
that's an object address which provides further info about the executed
command. To wit:
* for ALTER DOMAIN / ADD CONSTRAINT, it corresponds to the address of
the new constraint
* for ALTER OBJECT / SET SCHEMA, it corresponds to the address of the
schema that originally contained the object.
* for ALTER EXTENSION {ADD, DROP} OBJECT, it corresponds to the address
of the object added to or dropped from the extension.
There's no user-visible change in this commit, and no functional change
either.
Discussion: 20150218213255.GC6717@tamriel.snowman.net
Reviewed-By: Stephen Frost, Andres Freund
2015-03-03 18:10:50 +01:00
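The ObjectAddress the commit message describes is a (classId, objectId, objectSubId) triple: the catalog the object lives in, the object's OID, and a sub-object number (e.g. a column). A minimal sketch of the idea; the struct mirrors PostgreSQL's, but the helper function and the numeric values in the usage are illustrative:

```c
#include <assert.h>

/* Sketch of the ObjectAddress triple described above. */
typedef unsigned int Oid;

typedef struct ObjectAddress
{
	Oid			classId;		/* catalog the object lives in */
	Oid			objectId;		/* OID of the object itself */
	int			objectSubId;	/* sub-object number, or 0 for the whole object */
} ObjectAddress;

/* Analogous to the ObjectAddressSet() macro: fill in a whole-object address */
ObjectAddress
make_object_address(Oid classId, Oid objectId)
{
	ObjectAddress addr;

	addr.classId = classId;
	addr.objectId = objectId;
	addr.objectSubId = 0;
	return addr;
}
```

Returning this triple instead of a bare OID is what lets event triggers tell a type apart from, say, a function with the same OID value in another catalog.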

	ObjectAddressSet(address, TypeRelationId, typeOid);

	/* Clean up */
	table_close(rel, RowExclusiveLock);

	return address;
}

/*
 * AlterTypeOwner_oid - change type owner unconditionally
 *
 * This function recurses to handle a pg_class entry, if necessary.  It
 * invokes any necessary access object hooks.  If hasDependEntry is true, this
 * function modifies the pg_shdepend entry appropriately (this should be
 * passed as false only for table rowtypes and array types).
 *
 * This is used by ALTER TABLE/TYPE OWNER commands, as well as by REASSIGN
 * OWNED BY.  It assumes the caller has done all needed checks.
 */
void
AlterTypeOwner_oid(Oid typeOid, Oid newOwnerId, bool hasDependEntry)
{
	Relation	rel;
	HeapTuple	tup;
	Form_pg_type typTup;

	rel = table_open(TypeRelationId, RowExclusiveLock);

	tup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(typeOid));
	if (!HeapTupleIsValid(tup))
		elog(ERROR, "cache lookup failed for type %u", typeOid);
	typTup = (Form_pg_type) GETSTRUCT(tup);

	/*
	 * If it's a composite type, invoke ATExecChangeOwner so that we fix up
	 * the pg_class entry properly.  That will call back to
	 * AlterTypeOwnerInternal to take care of the pg_type entry(s).
	 */
	if (typTup->typtype == TYPTYPE_COMPOSITE)
		ATExecChangeOwner(typTup->typrelid, newOwnerId, true, AccessExclusiveLock);
	else
		AlterTypeOwnerInternal(typeOid, newOwnerId);

	/* Update owner dependency reference */
	if (hasDependEntry)
		changeDependencyOnOwner(TypeRelationId, typeOid, newOwnerId);

	InvokeObjectPostAlterHook(TypeRelationId, typeOid, 0);

	ReleaseSysCache(tup);
	table_close(rel, RowExclusiveLock);
}

Support arrays of composite types, including the rowtypes of regular tables
and views (but not system catalogs, nor sequences or toast tables). Get rid
of the hardwired convention that a type's array type is named exactly "_type",
instead using a new column pg_type.typarray to provide the linkage. (It still
will be named "_type", though, except in odd corner cases such as
maximum-length type names.)
Along the way, make tracking of owner and schema dependencies for types more
uniform: a type directly created by the user has these dependencies, while a
table rowtype or auto-generated array type does not have them, but depends on
its parent object instead.
David Fetter, Andrew Dunstan, Tom Lane
2007-05-11 19:57:14 +02:00

/*
 * AlterTypeOwnerInternal - bare-bones type owner change.
 *
 * This routine simply modifies the owner of a pg_type entry, and recurses
 * to handle a possible array type.
 */
void
AlterTypeOwnerInternal(Oid typeOid, Oid newOwnerId)
{
	Relation	rel;
	HeapTuple	tup;
	Form_pg_type typTup;
	Datum		repl_val[Natts_pg_type];
	bool		repl_null[Natts_pg_type];
	bool		repl_repl[Natts_pg_type];
	Acl		   *newAcl;
	Datum		aclDatum;
	bool		isNull;

	rel = table_open(TypeRelationId, RowExclusiveLock);

	tup = SearchSysCacheCopy1(TYPEOID, ObjectIdGetDatum(typeOid));
	if (!HeapTupleIsValid(tup))
		elog(ERROR, "cache lookup failed for type %u", typeOid);
	typTup = (Form_pg_type) GETSTRUCT(tup);

	memset(repl_null, false, sizeof(repl_null));
	memset(repl_repl, false, sizeof(repl_repl));

	repl_repl[Anum_pg_type_typowner - 1] = true;
	repl_val[Anum_pg_type_typowner - 1] = ObjectIdGetDatum(newOwnerId);

	aclDatum = heap_getattr(tup,
							Anum_pg_type_typacl,
							RelationGetDescr(rel),
							&isNull);
	/* Null ACLs do not require changes */
	if (!isNull)
	{
		newAcl = aclnewowner(DatumGetAclP(aclDatum),
							 typTup->typowner, newOwnerId);
		repl_repl[Anum_pg_type_typacl - 1] = true;
		repl_val[Anum_pg_type_typacl - 1] = PointerGetDatum(newAcl);
	}

	tup = heap_modify_tuple(tup, RelationGetDescr(rel), repl_val, repl_null,
							repl_repl);

	CatalogTupleUpdate(rel, &tup->t_self, tup);

	/* If it has an array type, update that too */
	if (OidIsValid(typTup->typarray))
		AlterTypeOwnerInternal(typTup->typarray, newOwnerId);

	/* Clean up */
	table_close(rel, RowExclusiveLock);
}
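The function above uses the repl_val/repl_null/repl_repl convention that goes with heap_modify_tuple(): three parallel per-column arrays saying whether to replace a column, what the new value is, and whether the new value is NULL, so untouched columns are copied through unchanged. A self-contained sketch of that pattern; `Row`, `NCOLS`, and `modify_row` are invented stand-ins, not PostgreSQL APIs:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define NCOLS 4				/* illustrative column count */

/* Toy row: a value and a null flag per column */
typedef struct Row
{
	long		values[NCOLS];
	bool		nulls[NCOLS];
} Row;

/* Apply only the columns flagged in repl_repl, like heap_modify_tuple() */
void
modify_row(Row *row,
		   const long repl_val[NCOLS],
		   const bool repl_null[NCOLS],
		   const bool repl_repl[NCOLS])
{
	for (int i = 0; i < NCOLS; i++)
	{
		if (!repl_repl[i])
			continue;			/* column left untouched */
		row->values[i] = repl_val[i];
		row->nulls[i] = repl_null[i];
	}
}
```

As in AlterTypeOwnerInternal, the caller memset()s repl_null and repl_repl to false first, then flags just the columns it wants rewritten (typowner, and typacl only when the ACL is not null).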

/*
 * Execute ALTER TYPE SET SCHEMA
 */
ObjectAddress
AlterTypeNamespace(List *names, const char *newschema, ObjectType objecttype,
				   Oid *oldschema)
{
	TypeName   *typename;
	Oid			typeOid;
	Oid			nspOid;
	Oid			oldNspOid;
	ObjectAddresses *objsMoved;
	ObjectAddress myself;

	/* Make a TypeName so we can use standard type lookup machinery */
	typename = makeTypeNameFromNameList(names);
	typeOid = typenameTypeId(NULL, typename);

	/* Don't allow ALTER DOMAIN on a type */
	if (objecttype == OBJECT_DOMAIN && get_typtype(typeOid) != TYPTYPE_DOMAIN)
		ereport(ERROR,
				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
				 errmsg("%s is not a domain",
						format_type_be(typeOid))));

	/* get schema OID and check its permissions */
	nspOid = LookupCreationNamespace(newschema);

	objsMoved = new_object_addresses();
	oldNspOid = AlterTypeNamespace_oid(typeOid, nspOid, objsMoved);
	free_object_addresses(objsMoved);

	if (oldschema)
		*oldschema = oldNspOid;

	ObjectAddressSet(myself, TypeRelationId, typeOid);

	return myself;
}

Oid
AlterTypeNamespace_oid(Oid typeOid, Oid nspOid, ObjectAddresses *objsMoved)
{
	Oid			elemOid;

	/* check permissions on type */
	if (!pg_type_ownercheck(typeOid, GetUserId()))
		aclcheck_error_type(ACLCHECK_NOT_OWNER, typeOid);

Support arrays of composite types, including the rowtypes of regular tables
and views (but not system catalogs, nor sequences or toast tables).  Get rid
of the hardwired convention that a type's array type is named exactly "_type",
instead using a new column pg_type.typarray to provide the linkage.  (It still
will be named "_type", though, except in odd corner cases such as
maximum-length type names.)

Along the way, make tracking of owner and schema dependencies for types more
uniform: a type directly created by the user has these dependencies, while a
table rowtype or auto-generated array type does not have them, but depends on
its parent object instead.

David Fetter, Andrew Dunstan, Tom Lane

	/* don't allow direct alteration of array types */
	elemOid = get_element_type(typeOid);
	if (OidIsValid(elemOid) && get_array_type(elemOid) == typeOid)
		ereport(ERROR,
				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
				 errmsg("cannot alter array type %s",
						format_type_be(typeOid)),
				 errhint("You can alter type %s, which will alter the array type as well.",
						 format_type_be(elemOid))));

	/* and do the work */
	return AlterTypeNamespaceInternal(typeOid, nspOid, false, true, objsMoved);
}
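At the SQL level, the array-type guard above behaves as follows (a sketch; the type and schema names `mytype` and `s2` are hypothetical, and the expected errors are taken from the ereport calls in this function):

```sql
CREATE SCHEMA s2;
CREATE TYPE mytype AS ENUM ('a', 'b');

-- Naming the auto-generated array type directly is rejected:
ALTER TYPE public._mytype SET SCHEMA s2;
-- ERROR:  cannot alter array type mytype[]
-- HINT:  You can alter type mytype, which will alter the array type as well.

-- Moving the element type recurses to its array type automatically:
ALTER TYPE mytype SET SCHEMA s2;
```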

/*
 * Move specified type to new namespace.
 *
 * Caller must have already checked privileges.
 *
 * The function automatically recurses to process the type's array type,
 * if any.  isImplicitArray should be true only when doing this internal
 * recursion (outside callers must never try to move an array type directly).
 *
 * If errorOnTableType is true, the function errors out if the type is
 * a table type.  ALTER TABLE has to be used to move a table to a new
 * namespace.
 *
 * Returns the type's old namespace OID.
 */
Oid
AlterTypeNamespaceInternal(Oid typeOid, Oid nspOid,
						   bool isImplicitArray,
						   bool errorOnTableType,
						   ObjectAddresses *objsMoved)
{
	Relation	rel;
	HeapTuple	tup;
	Form_pg_type typform;
	Oid			oldNspOid;
	Oid			arrayOid;
	bool		isCompositeType;
	ObjectAddress thisobj;

	/*
	 * Make sure we haven't moved this object previously.
	 */
	thisobj.classId = TypeRelationId;
	thisobj.objectId = typeOid;
	thisobj.objectSubId = 0;

	if (object_address_present(&thisobj, objsMoved))
		return InvalidOid;

	rel = table_open(TypeRelationId, RowExclusiveLock);

	tup = SearchSysCacheCopy1(TYPEOID, ObjectIdGetDatum(typeOid));
	if (!HeapTupleIsValid(tup))
		elog(ERROR, "cache lookup failed for type %u", typeOid);
	typform = (Form_pg_type) GETSTRUCT(tup);

	oldNspOid = typform->typnamespace;
	arrayOid = typform->typarray;

	/* If the type is already there, we can skip these next few checks. */
	if (oldNspOid != nspOid)
	{
		/* common checks on switching namespaces */
		CheckSetNamespace(oldNspOid, nspOid);

		/* check for duplicate name (more friendly than unique-index failure) */
		if (SearchSysCacheExists2(TYPENAMENSP,
								  NameGetDatum(&typform->typname),
								  ObjectIdGetDatum(nspOid)))
			ereport(ERROR,
					(errcode(ERRCODE_DUPLICATE_OBJECT),
					 errmsg("type \"%s\" already exists in schema \"%s\"",
							NameStr(typform->typname),
							get_namespace_name(nspOid))));
	}

	/* Detect whether type is a composite type (but not a table rowtype) */
	isCompositeType =
		(typform->typtype == TYPTYPE_COMPOSITE &&
		 get_rel_relkind(typform->typrelid) == RELKIND_COMPOSITE_TYPE);

	/* Enforce not-table-type if requested */
	if (typform->typtype == TYPTYPE_COMPOSITE && !isCompositeType &&
		errorOnTableType)
		ereport(ERROR,
				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
				 errmsg("%s is a table's row type",
						format_type_be(typeOid)),
				 errhint("Use ALTER TABLE instead.")));

	if (oldNspOid != nspOid)
	{
		/* OK, modify the pg_type row */

		/* tup is a copy, so we can scribble directly on it */
		typform->typnamespace = nspOid;

		CatalogTupleUpdate(rel, &tup->t_self, tup);
	}

	/*
	 * Composite types have pg_class entries.
	 *
	 * We need to modify the pg_class tuple as well to reflect the change of
	 * schema.
	 */
	if (isCompositeType)
	{
		Relation	classRel;

		classRel = table_open(RelationRelationId, RowExclusiveLock);

		AlterRelationNamespaceInternal(classRel, typform->typrelid,
									   oldNspOid, nspOid,
									   false, objsMoved);

		table_close(classRel, RowExclusiveLock);

		/*
		 * Check for constraints associated with the composite type (we don't
		 * currently support this, but probably will someday).
		 */
		AlterConstraintNamespaces(typform->typrelid, oldNspOid,
								  nspOid, false, objsMoved);
	}
	else
	{
		/* If it's a domain, it might have constraints */
		if (typform->typtype == TYPTYPE_DOMAIN)
			AlterConstraintNamespaces(typeOid, oldNspOid, nspOid, true,
									  objsMoved);
	}

	/*
	 * Update dependency on schema, if any --- a table rowtype has not got
	 * one, and neither does an implicit array.
	 */
	if (oldNspOid != nspOid &&
		(isCompositeType || typform->typtype != TYPTYPE_COMPOSITE) &&
		!isImplicitArray)
		if (changeDependencyFor(TypeRelationId, typeOid,
								NamespaceRelationId, oldNspOid, nspOid) != 1)
			elog(ERROR, "failed to change schema dependency for type %s",
				 format_type_be(typeOid));

	InvokeObjectPostAlterHook(TypeRelationId, typeOid, 0);

	heap_freetuple(tup);

	table_close(rel, RowExclusiveLock);

	add_exact_object_address(&thisobj, objsMoved);

	/* Recursively alter the associated array type, if any */
	if (OidIsValid(arrayOid))
		AlterTypeNamespaceInternal(arrayOid, nspOid, true, true, objsMoved);

	return oldNspOid;
}
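The errorOnTableType branch above corresponds to the following SQL behavior (a sketch; the table and schema names `t` and `s2` are hypothetical, and the expected error matches the ereport in this function):

```sql
CREATE TABLE t (a int);
CREATE SCHEMA s2;

-- A table's rowtype cannot be moved via ALTER TYPE:
ALTER TYPE t SET SCHEMA s2;
-- ERROR:  t is a table's row type
-- HINT:  Use ALTER TABLE instead.

-- ALTER TABLE moves the rowtype along with the table:
ALTER TABLE t SET SCHEMA s2;
```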

/*
 * AlterType
 *		ALTER TYPE <type> SET (option = ...)
 *
 * NOTE: the set of changes that can be allowed here is constrained by many
 * non-obvious implementation restrictions.  Tread carefully when considering
 * adding new flexibility.
 */
ObjectAddress
AlterType(AlterTypeStmt *stmt)
{
	ObjectAddress address;
	Relation	catalog;
	TypeName   *typename;
	HeapTuple	tup;
	Oid			typeOid;
	Form_pg_type typForm;
	bool		requireSuper = false;
	AlterTypeRecurseParams atparams;
	ListCell   *pl;

	catalog = table_open(TypeRelationId, RowExclusiveLock);

	/* Make a TypeName so we can use standard type lookup machinery */
	typename = makeTypeNameFromNameList(stmt->typeName);
	tup = typenameType(NULL, typename, NULL);

	typeOid = typeTypeId(tup);
	typForm = (Form_pg_type) GETSTRUCT(tup);

	/* Process options */
	memset(&atparams, 0, sizeof(atparams));
	foreach(pl, stmt->options)
	{
		DefElem    *defel = (DefElem *) lfirst(pl);

		if (strcmp(defel->defname, "storage") == 0)
		{
			char	   *a = defGetString(defel);

			if (pg_strcasecmp(a, "plain") == 0)
				atparams.storage = TYPSTORAGE_PLAIN;
			else if (pg_strcasecmp(a, "external") == 0)
				atparams.storage = TYPSTORAGE_EXTERNAL;
			else if (pg_strcasecmp(a, "extended") == 0)
				atparams.storage = TYPSTORAGE_EXTENDED;
			else if (pg_strcasecmp(a, "main") == 0)
				atparams.storage = TYPSTORAGE_MAIN;
			else
				ereport(ERROR,
						(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
						 errmsg("storage \"%s\" not recognized", a)));

			/*
			 * Validate the storage request.  If the type isn't varlena, it
			 * certainly doesn't support non-PLAIN storage.
			 */
			if (atparams.storage != TYPSTORAGE_PLAIN && typForm->typlen != -1)
				ereport(ERROR,
						(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
						 errmsg("fixed-size types must have storage PLAIN")));

			/*
			 * Switching from PLAIN to non-PLAIN is allowed, but it requires
			 * superuser, since we can't validate that the type's C functions
			 * will support it.  Switching from non-PLAIN to PLAIN is
			 * disallowed outright, because it's not practical to ensure that
			 * no tables have toasted values of the type.  Switching among
			 * different non-PLAIN settings is OK, since it just constitutes a
			 * change in the strategy requested for columns created in the
			 * future.
			 */
			if (atparams.storage != TYPSTORAGE_PLAIN &&
				typForm->typstorage == TYPSTORAGE_PLAIN)
				requireSuper = true;
			else if (atparams.storage == TYPSTORAGE_PLAIN &&
					 typForm->typstorage != TYPSTORAGE_PLAIN)
				ereport(ERROR,
						(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
						 errmsg("cannot change type's storage to PLAIN")));

			atparams.updateStorage = true;
		}
		else if (strcmp(defel->defname, "receive") == 0)
		{
			if (defel->arg != NULL)
				atparams.receiveOid =
					findTypeReceiveFunction(defGetQualifiedName(defel),
											typeOid);
			else
				atparams.receiveOid = InvalidOid;	/* NONE, remove function */
			atparams.updateReceive = true;
			/* Replacing an I/O function requires superuser. */
			requireSuper = true;
		}
		else if (strcmp(defel->defname, "send") == 0)
		{
			if (defel->arg != NULL)
				atparams.sendOid =
					findTypeSendFunction(defGetQualifiedName(defel),
										 typeOid);
			else
				atparams.sendOid = InvalidOid;	/* NONE, remove function */
			atparams.updateSend = true;
			/* Replacing an I/O function requires superuser. */
			requireSuper = true;
		}
		else if (strcmp(defel->defname, "typmod_in") == 0)
		{
			if (defel->arg != NULL)
				atparams.typmodinOid =
					findTypeTypmodinFunction(defGetQualifiedName(defel));
			else
				atparams.typmodinOid = InvalidOid;	/* NONE, remove function */
			atparams.updateTypmodin = true;
			/* Replacing an I/O function requires superuser. */
			requireSuper = true;
		}
		else if (strcmp(defel->defname, "typmod_out") == 0)
		{
			if (defel->arg != NULL)
				atparams.typmodoutOid =
					findTypeTypmodoutFunction(defGetQualifiedName(defel));
			else
				atparams.typmodoutOid = InvalidOid; /* NONE, remove function */
			atparams.updateTypmodout = true;
			/* Replacing an I/O function requires superuser. */
			requireSuper = true;
		}
		else if (strcmp(defel->defname, "analyze") == 0)
		{
			if (defel->arg != NULL)
				atparams.analyzeOid =
					findTypeAnalyzeFunction(defGetQualifiedName(defel),
											typeOid);
			else
				atparams.analyzeOid = InvalidOid;	/* NONE, remove function */
			atparams.updateAnalyze = true;
			/* Replacing an analyze function requires superuser. */
			requireSuper = true;
		}
		else if (strcmp(defel->defname, "subscript") == 0)
		{
			if (defel->arg != NULL)
				atparams.subscriptOid =
					findTypeSubscriptingFunction(defGetQualifiedName(defel),
												 typeOid);
			else
				atparams.subscriptOid = InvalidOid; /* NONE, remove function */
			atparams.updateSubscript = true;
			/* Replacing a subscript function requires superuser. */
			requireSuper = true;
		}

		/*
		 * The rest of the options that CREATE accepts cannot be changed.
		 * Check for them so that we can give a meaningful error message.
		 */
		else if (strcmp(defel->defname, "input") == 0 ||
				 strcmp(defel->defname, "output") == 0 ||
				 strcmp(defel->defname, "internallength") == 0 ||
				 strcmp(defel->defname, "passedbyvalue") == 0 ||
				 strcmp(defel->defname, "alignment") == 0 ||
				 strcmp(defel->defname, "like") == 0 ||
				 strcmp(defel->defname, "category") == 0 ||
				 strcmp(defel->defname, "preferred") == 0 ||
				 strcmp(defel->defname, "default") == 0 ||
				 strcmp(defel->defname, "element") == 0 ||
				 strcmp(defel->defname, "delimiter") == 0 ||
				 strcmp(defel->defname, "collatable") == 0)
			ereport(ERROR,
					(errcode(ERRCODE_SYNTAX_ERROR),
					 errmsg("type attribute \"%s\" cannot be changed",
							defel->defname)));
		else
			ereport(ERROR,
					(errcode(ERRCODE_SYNTAX_ERROR),
					 errmsg("type attribute \"%s\" not recognized",
							defel->defname)));
	}

	/*
	 * Permissions check.  Require superuser if we decided the command
	 * requires that, else must own the type.
	 */
	if (requireSuper)
	{
		if (!superuser())
			ereport(ERROR,
					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
					 errmsg("must be superuser to alter a type")));
	}
	else
	{
		if (!pg_type_ownercheck(typeOid, GetUserId()))
			aclcheck_error_type(ACLCHECK_NOT_OWNER, typeOid);
	}

	/*
	 * We disallow all forms of ALTER TYPE SET on types that aren't plain base
	 * types.  It would for example be highly unsafe, not to mention
	 * pointless, to change the send/receive functions for a composite type.
	 * Moreover, pg_dump has no support for changing these properties on
	 * non-base types.  We might weaken this someday, but not now.
	 *
	 * Note: if you weaken this enough to allow composite types, be sure to
	 * adjust the GenerateTypeDependencies call in AlterTypeRecurse.
	 */
	if (typForm->typtype != TYPTYPE_BASE)
		ereport(ERROR,
				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
				 errmsg("%s is not a base type",
						format_type_be(typeOid))));

	/*
	 * For the same reasons, don't allow direct alteration of array types.
	 */

Support subscripting of arbitrary types, not only arrays.

This patch generalizes the subscripting infrastructure so that any
data type can be subscripted, if it provides a handler function to
define what that means.  Traditional variable-length (varlena) arrays
all use array_subscript_handler(), while the existing fixed-length
types that support subscripting use raw_array_subscript_handler().
It's expected that other types that want to use subscripting notation
will define their own handlers.  (This patch provides no such new
features, though; it only lays the foundation for them.)

To do this, move the parser's semantic processing of subscripts
(including coercion to whatever data type is required) into a
method callback supplied by the handler.  On the execution side,
replace the ExecEvalSubscriptingRef* layer of functions with direct
calls to callback-supplied execution routines.  (Thus, essentially
no new run-time overhead should be caused by this patch.  Indeed,
there is room to remove some overhead by supplying specialized
execution routines.  This patch does a little bit in that line,
but more could be done.)

Additional work is required here and there to remove formerly
hard-wired assumptions about the result type, collation, etc
of a SubscriptingRef expression node; and to remove assumptions
that the subscript values must be integers.

One useful side-effect of this is that we now have a less squishy
mechanism for identifying whether a data type is a "true" array:
instead of wiring in weird rules about typlen, we can look to see
if pg_type.typsubscript == F_ARRAY_SUBSCRIPT_HANDLER.  For this
to be bulletproof, we have to forbid user-defined types from using
that handler directly; but there seems no good reason for them to
do so.

This patch also removes assumptions that the number of subscripts
is limited to MAXDIM (6), or indeed has any hard-wired limit.
That limit still applies to types handled by array_subscript_handler
or raw_array_subscript_handler, but to discourage other dependencies
on this constant, I've moved it from c.h to utils/array.h.

Dmitry Dolgov, reviewed at various times by Tom Lane, Arthur Zakirov,
Peter Eisentraut, Pavel Stehule

Discussion: https://postgr.es/m/CA+q6zcVDuGBv=M0FqBYX8DPebS3F_0KQ6OVFobGJPM507_SZ_w@mail.gmail.com
Discussion: https://postgr.es/m/CA+q6zcVovR+XY4mfk-7oNk-rF91gH0PebnNfuUjuuDsyHjOcVA@mail.gmail.com

	if (IsTrueArrayType(typForm))
		ereport(ERROR,
				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
				 errmsg("%s is not a base type",
						format_type_be(typeOid))));

	/* OK, recursively update this type and any arrays/domains over it */
	AlterTypeRecurse(typeOid, false, tup, catalog, &atparams);

	/* Clean up */
	ReleaseSysCache(tup);

	table_close(catalog, RowExclusiveLock);

	ObjectAddressSet(address, TypeRelationId, typeOid);

	return address;
}
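A sketch of the option forms this function accepts, at the SQL level (the type and function names `mytext`, `mytext_send`, `mytext_recv` are hypothetical; the superuser requirement and the final error follow from the requireSuper logic and ereport calls above):

```sql
-- Change the TOAST strategy for future columns of a varlena base type:
ALTER TYPE mytext SET (storage = external);

-- Replace or drop optional support functions (requires superuser):
ALTER TYPE mytext SET (send = mytext_send, receive = mytext_recv);
ALTER TYPE mytext SET (analyze = NONE);

-- Properties fixed at creation time cannot be changed:
ALTER TYPE mytext SET (internallength = 8);
-- ERROR:  type attribute "internallength" cannot be changed
```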

/*
 * AlterTypeRecurse: one recursion step for AlterType()
 *
 * Apply the changes specified by "atparams" to the type identified by
 * "typeOid", whose existing pg_type tuple is "tup".  If necessary,
 * recursively update its array type as well.  Then search for any domains
 * over this type, and recursively apply (most of) the same changes to those
 * domains.
 *
 * We need this because the system generally assumes that a domain inherits
 * many properties from its base type.  See DefineDomain() above for details
 * of what is inherited.  Arrays inherit a smaller number of properties,
 * but not none.
 *
 * There's a race condition here, in that some other transaction could
 * concurrently add another domain atop this base type; we'd miss updating
 * that one.  Hence, be wary of allowing ALTER TYPE to change properties for
 * which it'd be really fatal for a domain to be out of sync with its base
 * type (typlen, for example).  In practice, races seem unlikely to be an
 * issue for plausible use-cases for ALTER TYPE.  If one does happen, it could
 * be fixed by re-doing the same ALTER TYPE once all prior transactions have
 * committed.
 */
static void
AlterTypeRecurse(Oid typeOid, bool isImplicitArray,
				 HeapTuple tup, Relation catalog,
				 AlterTypeRecurseParams *atparams)
{
	Datum		values[Natts_pg_type];
	bool		nulls[Natts_pg_type];
	bool		replaces[Natts_pg_type];
	HeapTuple	newtup;
	SysScanDesc scan;
	ScanKeyData key[1];
	HeapTuple	domainTup;

	/* Since this function recurses, it could be driven to stack overflow */
	check_stack_depth();

	/* Update the current type's tuple */
	memset(values, 0, sizeof(values));
	memset(nulls, 0, sizeof(nulls));
	memset(replaces, 0, sizeof(replaces));

	if (atparams->updateStorage)
	{
		replaces[Anum_pg_type_typstorage - 1] = true;
		values[Anum_pg_type_typstorage - 1] = CharGetDatum(atparams->storage);
	}
	if (atparams->updateReceive)
	{
		replaces[Anum_pg_type_typreceive - 1] = true;
		values[Anum_pg_type_typreceive - 1] = ObjectIdGetDatum(atparams->receiveOid);
	}
	if (atparams->updateSend)
	{
		replaces[Anum_pg_type_typsend - 1] = true;
		values[Anum_pg_type_typsend - 1] = ObjectIdGetDatum(atparams->sendOid);
	}
	if (atparams->updateTypmodin)
	{
		replaces[Anum_pg_type_typmodin - 1] = true;
		values[Anum_pg_type_typmodin - 1] = ObjectIdGetDatum(atparams->typmodinOid);
	}
	if (atparams->updateTypmodout)
	{
		replaces[Anum_pg_type_typmodout - 1] = true;
		values[Anum_pg_type_typmodout - 1] = ObjectIdGetDatum(atparams->typmodoutOid);
	}
	if (atparams->updateAnalyze)
	{
		replaces[Anum_pg_type_typanalyze - 1] = true;
		values[Anum_pg_type_typanalyze - 1] = ObjectIdGetDatum(atparams->analyzeOid);
	}
	if (atparams->updateSubscript)
	{
		replaces[Anum_pg_type_typsubscript - 1] = true;
		values[Anum_pg_type_typsubscript - 1] = ObjectIdGetDatum(atparams->subscriptOid);
	}

	newtup = heap_modify_tuple(tup, RelationGetDescr(catalog),
							   values, nulls, replaces);

	CatalogTupleUpdate(catalog, &newtup->t_self, newtup);

	/* Rebuild dependencies for this type */
	GenerateTypeDependencies(newtup,
							 catalog,
							 NULL,	/* don't have defaultExpr handy */
							 NULL,	/* don't have typacl handy */
							 0, /* we rejected composite types above */
							 isImplicitArray,	/* it might be an array */
							 isImplicitArray,	/* dependent iff it's array */
							 true);

	InvokeObjectPostAlterHook(TypeRelationId, typeOid, 0);

	/*
	 * Arrays inherit their base type's typmodin and typmodout, but none of
	 * the other properties we're concerned with here.  Recurse to the array
	 * type if needed.
	 */
	if (!isImplicitArray &&
		(atparams->updateTypmodin || atparams->updateTypmodout))
	{
		Oid			arrtypoid = ((Form_pg_type) GETSTRUCT(newtup))->typarray;

		if (OidIsValid(arrtypoid))
		{
			HeapTuple	arrtup;
			AlterTypeRecurseParams arrparams;

			arrtup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(arrtypoid));
			if (!HeapTupleIsValid(arrtup))
				elog(ERROR, "cache lookup failed for type %u", arrtypoid);

			memset(&arrparams, 0, sizeof(arrparams));
			arrparams.updateTypmodin = atparams->updateTypmodin;
			arrparams.updateTypmodout = atparams->updateTypmodout;
			arrparams.typmodinOid = atparams->typmodinOid;
			arrparams.typmodoutOid = atparams->typmodoutOid;

			AlterTypeRecurse(arrtypoid, true, arrtup, catalog, &arrparams);

			ReleaseSysCache(arrtup);
		}
	}

	/*
	 * Now we need to recurse to domains.  However, some properties are not
	 * inherited by domains, so clear the update flags for those.
	 */
	atparams->updateReceive = false;	/* domains use F_DOMAIN_RECV */
	atparams->updateTypmodin = false;	/* domains don't have typmods */
	atparams->updateTypmodout = false;
	atparams->updateSubscript = false;	/* domains don't have subscriptors */

	/* Skip the scan if nothing remains to be done */
	if (!(atparams->updateStorage ||
		  atparams->updateSend ||
		  atparams->updateAnalyze))
		return;
|
|
|
|
|
2020-03-06 18:19:29 +01:00
|
|
|
/* Search pg_type for possible domains over this type */
|
|
|
|
ScanKeyInit(&key[0],
|
|
|
|
Anum_pg_type_typbasetype,
|
|
|
|
BTEqualStrategyNumber, F_OIDEQ,
|
|
|
|
ObjectIdGetDatum(typeOid));
|
|
|
|
|
|
|
|
scan = systable_beginscan(catalog, InvalidOid, false,
|
|
|
|
NULL, 1, key);
|
|
|
|
|
|
|
|
while ((domainTup = systable_getnext(scan)) != NULL)
|
|
|
|
{
|
|
|
|
Form_pg_type domainForm = (Form_pg_type) GETSTRUCT(domainTup);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Shouldn't have a nonzero typbasetype in a non-domain, but let's
|
|
|
|
* check
|
|
|
|
*/
|
|
|
|
if (domainForm->typtype != TYPTYPE_DOMAIN)
|
|
|
|
continue;
|
|
|
|
|
2020-07-31 23:11:28 +02:00
|
|
|
AlterTypeRecurse(domainForm->oid, false, domainTup, catalog, atparams);
|
2020-03-06 18:19:29 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
systable_endscan(scan);
|
|
|
|
}
|