Add UNIQUE null treatment option

The SQL standard has been ambiguous about whether null values in
unique constraints should be considered equal or not.  Different
implementations have different behaviors.  In the SQL:202x draft, this
has been formalized by making this implementation-defined and adding
an option on unique constraint definitions UNIQUE [ NULLS [NOT]
DISTINCT ] to choose a behavior explicitly.

This patch adds this option to PostgreSQL.  The default behavior
remains UNIQUE NULLS DISTINCT.  Making this happen in the btree code
is pretty easy; most of the patch is just to carry the flag around to
all the places that need it.

The CREATE UNIQUE INDEX syntax extension is not from the standard,
it's my own invention.

I named all the internal flags, catalog columns, etc. in the negative
("nulls not distinct") so that the default PostgreSQL behavior is the
default if the flag is false.
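
In concrete terms, the two modes behave like this — a minimal sketch against a server that includes this patch (PostgreSQL 15 or later); table names are illustrative:

```sql
-- Default mode: nulls are distinct, so several NULLs can coexist
-- in a uniquely constrained column.
CREATE TABLE t_default (a int UNIQUE);    -- same as UNIQUE NULLS DISTINCT
INSERT INTO t_default VALUES (NULL);
INSERT INTO t_default VALUES (NULL);      -- allowed

-- New mode: nulls are treated as equal for uniqueness purposes.
CREATE TABLE t_not_distinct (a int UNIQUE NULLS NOT DISTINCT);
INSERT INTO t_not_distinct VALUES (NULL);
INSERT INTO t_not_distinct VALUES (NULL); -- fails with a unique violation
```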

Reviewed-by: Maxim Orlov <orlovmg@gmail.com>
Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/84e5ee1b-387e-9a54-c326-9082674bde78@enterprisedb.com
Author: Peter Eisentraut
Date:   2022-02-03 11:29:54 +01:00
Parent: f862d57057
Commit: 94aa7cc5f7
36 changed files with 348 additions and 57 deletions

@@ -4265,6 +4265,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>indnullsnotdistinct</structfield> <type>bool</type>
+      </para>
+      <para>
+       This value is only used for unique indexes.  If false, this unique
+       index will consider null values distinct (so the index can contain
+       multiple null values in a column, the default PostgreSQL behavior).  If
+       it is true, it will consider null values to be equal (so the index can
+       only contain one null value in a column).
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>indisprimary</structfield> <type>bool</type>
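
The new catalog column can be inspected directly; for example (the constraint-backed index name below is illustrative, following the usual autogenerated naming):

```sql
SELECT indexrelid::regclass AS index, indisunique, indnullsnotdistinct
FROM pg_index
WHERE indexrelid = 't_not_distinct_a_key'::regclass;
```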

@@ -759,14 +759,33 @@ CREATE TABLE products (
     In general, a unique constraint is violated if there is more than
     one row in the table where the values of all of the
     columns included in the constraint are equal.
-    However, two null values are never considered equal in this
+    By default, two null values are not considered equal in this
     comparison.  That means even in the presence of a
     unique constraint it is possible to store duplicate
     rows that contain a null value in at least one of the constrained
-    columns.  This behavior conforms to the SQL standard, but we have
-    heard that other SQL databases might not follow this rule.  So be
-    careful when developing applications that are intended to be
-    portable.
+    columns.  This behavior can be changed by adding the clause <literal>NULLS
+    NOT DISTINCT</literal>, like
+<programlisting>
+CREATE TABLE products (
+    product_no integer UNIQUE <emphasis>NULLS NOT DISTINCT</emphasis>,
+    name text,
+    price numeric
+);
+</programlisting>
+    or
+<programlisting>
+CREATE TABLE products (
+    product_no integer,
+    name text,
+    price numeric,
+    UNIQUE <emphasis>NULLS NOT DISTINCT</emphasis> (product_no)
+);
+</programlisting>
+    The default behavior can be specified explicitly using <literal>NULLS
+    DISTINCT</literal>.  The default null treatment in unique constraints is
+    implementation-defined according to the SQL standard, and other
+    implementations have a different behavior.  So be careful when developing
+    applications that are intended to be portable.
    </para>
   </sect2>

@@ -6899,6 +6899,18 @@ ORDER BY c.ordinal_position;
        <literal>YES</literal>)
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>nulls_distinct</structfield> <type>yes_or_no</type>
+      </para>
+      <para>
+       If the constraint is a unique constraint, then <literal>YES</literal>
+       if the constraint treats nulls as distinct or <literal>NO</literal> if
+       it treats nulls as not distinct, otherwise null for other types of
+       constraints.
+      </para></entry>
+     </row>
    </tbody>
   </tgroup>
  </table>
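
The new view column can be queried like any other; for example (the table name is illustrative): a unique constraint declared with the default treatment reports `YES`, one declared `NULLS NOT DISTINCT` reports `NO`, and non-unique constraints report null.

```sql
SELECT constraint_name, constraint_type, nulls_distinct
FROM information_schema.table_constraints
WHERE table_name = 'products'
  AND constraint_type = 'UNIQUE';
```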

@@ -103,7 +103,7 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
   DEFAULT <replaceable>default_expr</replaceable> |
   GENERATED ALWAYS AS ( <replaceable>generation_expr</replaceable> ) STORED |
   GENERATED { ALWAYS | BY DEFAULT } AS IDENTITY [ ( <replaceable>sequence_options</replaceable> ) ] |
-  UNIQUE <replaceable class="parameter">index_parameters</replaceable> |
+  UNIQUE [ NULLS [ NOT ] DISTINCT ] <replaceable class="parameter">index_parameters</replaceable> |
   PRIMARY KEY <replaceable class="parameter">index_parameters</replaceable> |
   REFERENCES <replaceable class="parameter">reftable</replaceable> [ ( <replaceable class="parameter">refcolumn</replaceable> ) ] [ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ]
     [ ON DELETE <replaceable class="parameter">referential_action</replaceable> ] [ ON UPDATE <replaceable class="parameter">referential_action</replaceable> ] }
@@ -113,7 +113,7 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
 [ CONSTRAINT <replaceable class="parameter">constraint_name</replaceable> ]
 { CHECK ( <replaceable class="parameter">expression</replaceable> ) [ NO INHERIT ] |
-  UNIQUE ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) <replaceable class="parameter">index_parameters</replaceable> |
+  UNIQUE [ NULLS [ NOT ] DISTINCT ] ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) <replaceable class="parameter">index_parameters</replaceable> |
   PRIMARY KEY ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) <replaceable class="parameter">index_parameters</replaceable> |
   EXCLUDE [ USING <replaceable class="parameter">index_method</replaceable> ] ( <replaceable class="parameter">exclude_element</replaceable> WITH <replaceable class="parameter">operator</replaceable> [, ... ] ) <replaceable class="parameter">index_parameters</replaceable> [ WHERE ( <replaceable class="parameter">predicate</replaceable> ) ] |
   FOREIGN KEY ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) REFERENCES <replaceable class="parameter">reftable</replaceable> [ ( <replaceable class="parameter">refcolumn</replaceable> [, ... ] ) ]
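
Since the clause is part of the column_constraint and table_constraint grammar shown above, it is also available when adding a constraint after the fact; a sketch (table and constraint names illustrative):

```sql
ALTER TABLE products
    ADD CONSTRAINT products_product_no_key
    UNIQUE NULLS NOT DISTINCT (product_no);
```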

@@ -24,6 +24,7 @@ PostgreSQL documentation
 CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] <replaceable class="parameter">name</replaceable> ] ON [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ USING <replaceable class="parameter">method</replaceable> ]
     ( { <replaceable class="parameter">column_name</replaceable> | ( <replaceable class="parameter">expression</replaceable> ) } [ COLLATE <replaceable class="parameter">collation</replaceable> ] [ <replaceable class="parameter">opclass</replaceable> [ ( <replaceable class="parameter">opclass_parameter</replaceable> = <replaceable class="parameter">value</replaceable> [, ... ] ) ] ] [ ASC | DESC ] [ NULLS { FIRST | LAST } ] [, ...] )
     [ INCLUDE ( <replaceable class="parameter">column_name</replaceable> [, ...] ) ]
+    [ NULLS [ NOT ] DISTINCT ]
     [ WITH ( <replaceable class="parameter">storage_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
     [ TABLESPACE <replaceable class="parameter">tablespace_name</replaceable> ]
     [ WHERE <replaceable class="parameter">predicate</replaceable> ]
@@ -334,6 +335,18 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] <replaceable class=
      </listitem>
     </varlistentry>
 
+    <varlistentry>
+     <term><literal>NULLS DISTINCT</literal></term>
+     <term><literal>NULLS NOT DISTINCT</literal></term>
+     <listitem>
+      <para>
+       Specifies whether for a unique index, null values should be considered
+       distinct (not equal).  The default is that they are distinct, so that
+       a unique index could contain multiple null values in a column.
+      </para>
+     </listitem>
+    </varlistentry>
+
     <varlistentry>
      <term><replaceable class="parameter">storage_parameter</replaceable></term>
      <listitem>
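
Per the synopsis above, the clause goes after the optional <literal>INCLUDE</literal> list. As the commit message notes, this placement in <literal>CREATE UNIQUE INDEX</literal> is a PostgreSQL extension, not standard SQL. A sketch (names illustrative):

```sql
CREATE UNIQUE INDEX products_product_no_idx
    ON products (product_no)
    NULLS NOT DISTINCT;
```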

@@ -67,7 +67,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
   DEFAULT <replaceable>default_expr</replaceable> |
   GENERATED ALWAYS AS ( <replaceable>generation_expr</replaceable> ) STORED |
   GENERATED { ALWAYS | BY DEFAULT } AS IDENTITY [ ( <replaceable>sequence_options</replaceable> ) ] |
-  UNIQUE <replaceable class="parameter">index_parameters</replaceable> |
+  UNIQUE [ NULLS [ NOT ] DISTINCT ] <replaceable class="parameter">index_parameters</replaceable> |
   PRIMARY KEY <replaceable class="parameter">index_parameters</replaceable> |
   REFERENCES <replaceable class="parameter">reftable</replaceable> [ ( <replaceable class="parameter">refcolumn</replaceable> ) ] [ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ]
     [ ON DELETE <replaceable class="parameter">referential_action</replaceable> ] [ ON UPDATE <replaceable class="parameter">referential_action</replaceable> ] }
@@ -77,7 +77,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
 [ CONSTRAINT <replaceable class="parameter">constraint_name</replaceable> ]
 { CHECK ( <replaceable class="parameter">expression</replaceable> ) [ NO INHERIT ] |
-  UNIQUE ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) <replaceable class="parameter">index_parameters</replaceable> |
+  UNIQUE [ NULLS [ NOT ] DISTINCT ] ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) <replaceable class="parameter">index_parameters</replaceable> |
   PRIMARY KEY ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) <replaceable class="parameter">index_parameters</replaceable> |
   EXCLUDE [ USING <replaceable class="parameter">index_method</replaceable> ] ( <replaceable class="parameter">exclude_element</replaceable> WITH <replaceable class="parameter">operator</replaceable> [, ... ] ) <replaceable class="parameter">index_parameters</replaceable> [ WHERE ( <replaceable class="parameter">predicate</replaceable> ) ] |
   FOREIGN KEY ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) REFERENCES <replaceable class="parameter">reftable</replaceable> [ ( <replaceable class="parameter">refcolumn</replaceable> [, ... ] ) ]
@@ -917,8 +917,8 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
    </varlistentry>
 
    <varlistentry>
-    <term><literal>UNIQUE</literal> (column constraint)</term>
-    <term><literal>UNIQUE ( <replaceable class="parameter">column_name</replaceable> [, ... ] )</literal>
+    <term><literal>UNIQUE [ NULLS [ NOT ] DISTINCT ]</literal> (column constraint)</term>
+    <term><literal>UNIQUE [ NULLS [ NOT ] DISTINCT ] ( <replaceable class="parameter">column_name</replaceable> [, ... ] )</literal>
     <optional> <literal>INCLUDE ( <replaceable class="parameter">column_name</replaceable> [, ...])</literal> </optional> (table constraint)</term>
     <listitem>
@@ -934,7 +934,8 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
     <para>
      For the purpose of a unique constraint, null values are not
-     considered equal.
+     considered equal, unless <literal>NULLS NOT DISTINCT</literal> is
+     specified.
     </para>
     <para>

@@ -398,9 +398,9 @@ _bt_search_insert(Relation rel, BTInsertState insertstate)
  * _bt_findinsertloc() to reuse most of the binary search work we do
  * here.
  *
- * Do not call here when there are NULL values in scan key.  NULL should be
- * considered unequal to NULL when checking for duplicates, but we are not
- * prepared to handle that correctly.
+ * This code treats NULLs as equal, unlike the default semantics for unique
+ * indexes.  So do not call here when there are NULL values in scan key and
+ * the index uses the default NULLS DISTINCT mode.
  */
 static TransactionId
 _bt_check_unique(Relation rel, BTInsertState insertstate, Relation heapRel,

@@ -89,6 +89,7 @@ typedef struct BTSpool
     Relation    heap;
     Relation    index;
     bool        isunique;
+    bool        nulls_not_distinct;
 } BTSpool;
 
 /*
@@ -106,6 +107,7 @@ typedef struct BTShared
     Oid         heaprelid;
     Oid         indexrelid;
     bool        isunique;
+    bool        nulls_not_distinct;
     bool        isconcurrent;
     int         scantuplesortstates;
@@ -206,6 +208,7 @@ typedef struct BTLeader
 typedef struct BTBuildState
 {
     bool        isunique;
+    bool        nulls_not_distinct;
     bool        havedead;
     Relation    heap;
     BTSpool    *spool;
@@ -307,6 +310,7 @@ btbuild(Relation heap, Relation index, IndexInfo *indexInfo)
 #endif                          /* BTREE_BUILD_STATS */
 
     buildstate.isunique = indexInfo->ii_Unique;
+    buildstate.nulls_not_distinct = indexInfo->ii_NullsNotDistinct;
     buildstate.havedead = false;
     buildstate.heap = heap;
     buildstate.spool = NULL;
@@ -380,6 +384,7 @@ _bt_spools_heapscan(Relation heap, Relation index, BTBuildState *buildstate,
     btspool->heap = heap;
     btspool->index = index;
     btspool->isunique = indexInfo->ii_Unique;
+    btspool->nulls_not_distinct = indexInfo->ii_NullsNotDistinct;
 
     /* Save as primary spool */
     buildstate->spool = btspool;
@@ -429,6 +434,7 @@ _bt_spools_heapscan(Relation heap, Relation index, BTBuildState *buildstate,
      */
     buildstate->spool->sortstate =
         tuplesort_begin_index_btree(heap, index, buildstate->isunique,
+                                    buildstate->nulls_not_distinct,
                                     maintenance_work_mem, coordinate,
                                     false);
@@ -468,7 +474,7 @@ _bt_spools_heapscan(Relation heap, Relation index, BTBuildState *buildstate,
          * full, so we give it only work_mem
          */
         buildstate->spool2->sortstate =
-            tuplesort_begin_index_btree(heap, index, false, work_mem,
+            tuplesort_begin_index_btree(heap, index, false, false, work_mem,
                                         coordinate2, false);
     }
@@ -1554,6 +1560,7 @@ _bt_begin_parallel(BTBuildState *buildstate, bool isconcurrent, int request)
     btshared->heaprelid = RelationGetRelid(btspool->heap);
     btshared->indexrelid = RelationGetRelid(btspool->index);
     btshared->isunique = btspool->isunique;
+    btshared->nulls_not_distinct = btspool->nulls_not_distinct;
     btshared->isconcurrent = isconcurrent;
     btshared->scantuplesortstates = scantuplesortstates;
     ConditionVariableInit(&btshared->workersdonecv);
@@ -1747,6 +1754,7 @@ _bt_leader_participate_as_worker(BTBuildState *buildstate)
     leaderworker->heap = buildstate->spool->heap;
     leaderworker->index = buildstate->spool->index;
     leaderworker->isunique = buildstate->spool->isunique;
+    leaderworker->nulls_not_distinct = buildstate->spool->nulls_not_distinct;
 
     /* Initialize second spool, if required */
     if (!btleader->btshared->isunique)
@@ -1846,6 +1854,7 @@ _bt_parallel_build_main(dsm_segment *seg, shm_toc *toc)
     btspool->heap = heapRel;
     btspool->index = indexRel;
     btspool->isunique = btshared->isunique;
+    btspool->nulls_not_distinct = btshared->nulls_not_distinct;
 
     /* Look up shared state private to tuplesort.c */
     sharedsort = shm_toc_lookup(toc, PARALLEL_KEY_TUPLESORT, false);
@@ -1928,6 +1937,7 @@ _bt_parallel_scan_and_sort(BTSpool *btspool, BTSpool *btspool2,
     btspool->sortstate = tuplesort_begin_index_btree(btspool->heap,
                                                      btspool->index,
                                                      btspool->isunique,
+                                                     btspool->nulls_not_distinct,
                                                      sortmem, coordinate,
                                                      false);
@@ -1950,13 +1960,14 @@ _bt_parallel_scan_and_sort(BTSpool *btspool, BTSpool *btspool2,
         coordinate2->nParticipants = -1;
         coordinate2->sharedsort = sharedsort2;
         btspool2->sortstate =
-            tuplesort_begin_index_btree(btspool->heap, btspool->index, false,
+            tuplesort_begin_index_btree(btspool->heap, btspool->index, false, false,
                                         Min(sortmem, work_mem), coordinate2,
                                         false);
     }
 
     /* Fill in buildstate for _bt_build_callback() */
     buildstate.isunique = btshared->isunique;
+    buildstate.nulls_not_distinct = btshared->nulls_not_distinct;
     buildstate.havedead = false;
     buildstate.heap = btspool->heap;
     buildstate.spool = btspool;

@@ -165,6 +165,13 @@ _bt_mkscankey(Relation rel, IndexTuple itup)
             key->anynullkeys = true;
     }
 
+    /*
+     * In NULLS NOT DISTINCT mode, we pretend that there are no null keys, so
+     * that full uniqueness check is done.
+     */
+    if (rel->rd_index->indnullsnotdistinct)
+        key->anynullkeys = false;
+
     return key;
 }

@@ -614,6 +614,7 @@ UpdateIndexRelation(Oid indexoid,
     values[Anum_pg_index_indnatts - 1] = Int16GetDatum(indexInfo->ii_NumIndexAttrs);
     values[Anum_pg_index_indnkeyatts - 1] = Int16GetDatum(indexInfo->ii_NumIndexKeyAttrs);
     values[Anum_pg_index_indisunique - 1] = BoolGetDatum(indexInfo->ii_Unique);
+    values[Anum_pg_index_indnullsnotdistinct - 1] = BoolGetDatum(indexInfo->ii_NullsNotDistinct);
     values[Anum_pg_index_indisprimary - 1] = BoolGetDatum(primary);
     values[Anum_pg_index_indisexclusion - 1] = BoolGetDatum(isexclusion);
     values[Anum_pg_index_indimmediate - 1] = BoolGetDatum(immediate);
@@ -1368,6 +1369,7 @@ index_concurrently_create_copy(Relation heapRelation, Oid oldIndexId,
                               indexExprs,
                               indexPreds,
                               oldInfo->ii_Unique,
+                              oldInfo->ii_NullsNotDistinct,
                               false,    /* not ready for inserts */
                               true);
@@ -2440,6 +2442,7 @@ BuildIndexInfo(Relation index)
                        RelationGetIndexExpressions(index),
                        RelationGetIndexPredicate(index),
                        indexStruct->indisunique,
+                       indexStruct->indnullsnotdistinct,
                        indexStruct->indisready,
                        false);
@@ -2499,6 +2502,7 @@ BuildDummyIndexInfo(Relation index)
                        RelationGetDummyIndexExpressions(index),
                        NIL,
                        indexStruct->indisunique,
+                       indexStruct->indnullsnotdistinct,
                        indexStruct->indisready,
                        false);
@@ -2532,6 +2536,9 @@ CompareIndexInfo(IndexInfo *info1, IndexInfo *info2,
     if (info1->ii_Unique != info2->ii_Unique)
         return false;
 
+    if (info1->ii_NullsNotDistinct != info2->ii_NullsNotDistinct)
+        return false;
+
     /* indexes are only equivalent if they have the same access method */
     if (info1->ii_Am != info2->ii_Am)
         return false;

@@ -1838,7 +1838,11 @@ CREATE VIEW table_constraints AS
              AS is_deferrable,
            CAST(CASE WHEN c.condeferred THEN 'YES' ELSE 'NO' END AS yes_or_no)
              AS initially_deferred,
-           CAST('YES' AS yes_or_no) AS enforced
+           CAST('YES' AS yes_or_no) AS enforced,
+           CAST(CASE WHEN c.contype = 'u'
+                     THEN CASE WHEN (SELECT NOT indnullsnotdistinct FROM pg_index WHERE indexrelid = conindid) THEN 'YES' ELSE 'NO' END
+                END
+                AS yes_or_no) AS nulls_distinct
 
     FROM pg_namespace nc,
          pg_namespace nr,
@@ -1868,7 +1872,8 @@ CREATE VIEW table_constraints AS
            CAST('CHECK' AS character_data) AS constraint_type,
            CAST('NO' AS yes_or_no) AS is_deferrable,
            CAST('NO' AS yes_or_no) AS initially_deferred,
-           CAST('YES' AS yes_or_no) AS enforced
+           CAST('YES' AS yes_or_no) AS enforced,
+           CAST(NULL AS yes_or_no) AS nulls_distinct
 
     FROM pg_namespace nr,
          pg_class r,

@@ -228,6 +228,7 @@ F263	Comma-separated predicates in simple CASE expression		NO	
 F271	Compound character literals		YES	
 F281	LIKE enhancements		YES	
 F291	UNIQUE predicate		NO	
+F292	UNIQUE null treatment		YES	SQL:202x draft
 F301	CORRESPONDING in query expressions		NO	
 F302	INTERSECT table operator		YES	
 F302	INTERSECT table operator	01	INTERSECT DISTINCT table operator	YES	

@@ -301,6 +301,7 @@ create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid,
     indexInfo->ii_ExclusionStrats = NULL;
     indexInfo->ii_OpclassOptions = NULL;
     indexInfo->ii_Unique = true;
+    indexInfo->ii_NullsNotDistinct = false;
     indexInfo->ii_ReadyForInserts = true;
     indexInfo->ii_CheckedUnchanged = false;
     indexInfo->ii_IndexUnchanged = false;

@@ -226,7 +226,7 @@ CheckIndexCompatible(Oid oldId,
      * ii_NumIndexKeyAttrs with same value.
      */
     indexInfo = makeIndexInfo(numberOfAttributes, numberOfAttributes,
-                              accessMethodId, NIL, NIL, false, false, false);
+                              accessMethodId, NIL, NIL, false, false, false, false);
     typeObjectId = (Oid *) palloc(numberOfAttributes * sizeof(Oid));
     collationObjectId = (Oid *) palloc(numberOfAttributes * sizeof(Oid));
     classObjectId = (Oid *) palloc(numberOfAttributes * sizeof(Oid));
@@ -867,6 +867,7 @@ DefineIndex(Oid relationId,
                               NIL,  /* expressions, NIL for now */
                               make_ands_implicit((Expr *) stmt->whereClause),
                               stmt->unique,
+                              stmt->nulls_not_distinct,
                               !concurrent,
                               concurrent);

@@ -3072,6 +3072,7 @@ _copyConstraint(const Constraint *from)
     COPY_NODE_FIELD(raw_expr);
     COPY_STRING_FIELD(cooked_expr);
     COPY_SCALAR_FIELD(generated_when);
+    COPY_SCALAR_FIELD(nulls_not_distinct);
     COPY_NODE_FIELD(keys);
     COPY_NODE_FIELD(including);
     COPY_NODE_FIELD(exclusions);
@@ -3664,6 +3665,7 @@ _copyIndexStmt(const IndexStmt *from)
     COPY_SCALAR_FIELD(oldCreateSubid);
     COPY_SCALAR_FIELD(oldFirstRelfilenodeSubid);
     COPY_SCALAR_FIELD(unique);
+    COPY_SCALAR_FIELD(nulls_not_distinct);
     COPY_SCALAR_FIELD(primary);
     COPY_SCALAR_FIELD(isconstraint);
     COPY_SCALAR_FIELD(deferrable);

@@ -1400,6 +1400,7 @@ _equalIndexStmt(const IndexStmt *a, const IndexStmt *b)
     COMPARE_SCALAR_FIELD(oldCreateSubid);
     COMPARE_SCALAR_FIELD(oldFirstRelfilenodeSubid);
     COMPARE_SCALAR_FIELD(unique);
+    COMPARE_SCALAR_FIELD(nulls_not_distinct);
     COMPARE_SCALAR_FIELD(primary);
     COMPARE_SCALAR_FIELD(isconstraint);
     COMPARE_SCALAR_FIELD(deferrable);
@@ -2710,6 +2711,7 @@ _equalConstraint(const Constraint *a, const Constraint *b)
     COMPARE_NODE_FIELD(raw_expr);
     COMPARE_STRING_FIELD(cooked_expr);
     COMPARE_SCALAR_FIELD(generated_when);
+    COMPARE_SCALAR_FIELD(nulls_not_distinct);
     COMPARE_NODE_FIELD(keys);
     COMPARE_NODE_FIELD(including);
     COMPARE_NODE_FIELD(exclusions);

@@ -741,7 +741,7 @@ make_ands_implicit(Expr *clause)
  */
 IndexInfo *
 makeIndexInfo(int numattrs, int numkeyattrs, Oid amoid, List *expressions,
-              List *predicates, bool unique, bool isready, bool concurrent)
+              List *predicates, bool unique, bool nulls_not_distinct, bool isready, bool concurrent)
 {
     IndexInfo  *n = makeNode(IndexInfo);
@@ -750,6 +750,7 @@ makeIndexInfo(int numattrs, int numkeyattrs, Oid amoid, List *expressions,
     Assert(n->ii_NumIndexKeyAttrs != 0);
     Assert(n->ii_NumIndexKeyAttrs <= n->ii_NumIndexAttrs);
     n->ii_Unique = unique;
+    n->ii_NullsNotDistinct = nulls_not_distinct;
     n->ii_ReadyForInserts = isready;
     n->ii_CheckedUnchanged = false;
     n->ii_IndexUnchanged = false;

@@ -2775,6 +2775,7 @@ _outIndexStmt(StringInfo str, const IndexStmt *node)
     WRITE_UINT_FIELD(oldCreateSubid);
     WRITE_UINT_FIELD(oldFirstRelfilenodeSubid);
     WRITE_BOOL_FIELD(unique);
+    WRITE_BOOL_FIELD(nulls_not_distinct);
     WRITE_BOOL_FIELD(primary);
     WRITE_BOOL_FIELD(isconstraint);
     WRITE_BOOL_FIELD(deferrable);
@@ -3713,6 +3714,7 @@ _outConstraint(StringInfo str, const Constraint *node)
         case CONSTR_UNIQUE:
             appendStringInfoString(str, "UNIQUE");
+            WRITE_BOOL_FIELD(nulls_not_distinct);
             WRITE_NODE_FIELD(keys);
             WRITE_NODE_FIELD(including);
             WRITE_NODE_FIELD(options);

View File

@@ -625,6 +625,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <ival> opt_window_exclusion_clause
 %type <str> opt_existing_window_name
 %type <boolean> opt_if_not_exists
+%type <boolean> opt_unique_null_treatment
 %type <ival> generated_when override_kind
 %type <partspec> PartitionSpec OptPartitionSpec
 %type <partelem> part_elem
@@ -3623,15 +3624,16 @@ ColConstraintElem:
 n->location = @1;
 $$ = (Node *)n;
 }
-| UNIQUE opt_definition OptConsTableSpace
+| UNIQUE opt_unique_null_treatment opt_definition OptConsTableSpace
 {
 Constraint *n = makeNode(Constraint);
 n->contype = CONSTR_UNIQUE;
 n->location = @1;
+n->nulls_not_distinct = !$2;
 n->keys = NULL;
-n->options = $2;
+n->options = $3;
 n->indexname = NULL;
-n->indexspace = $3;
+n->indexspace = $4;
 $$ = (Node *)n;
 }
 | PRIMARY KEY opt_definition OptConsTableSpace
@@ -3716,6 +3718,12 @@ ColConstraintElem:
 }
 ;
+
+opt_unique_null_treatment:
+NULLS_P DISTINCT { $$ = true; }
+| NULLS_P NOT DISTINCT { $$ = false; }
+| /*EMPTY*/ { $$ = true; }
+;
 generated_when:
 ALWAYS { $$ = ATTRIBUTE_IDENTITY_ALWAYS; }
 | BY DEFAULT { $$ = ATTRIBUTE_IDENTITY_BY_DEFAULT; }
@@ -3828,18 +3836,19 @@ ConstraintElem:
 n->initially_valid = !n->skip_validation;
 $$ = (Node *)n;
 }
-| UNIQUE '(' columnList ')' opt_c_include opt_definition OptConsTableSpace
+| UNIQUE opt_unique_null_treatment '(' columnList ')' opt_c_include opt_definition OptConsTableSpace
 ConstraintAttributeSpec
 {
 Constraint *n = makeNode(Constraint);
 n->contype = CONSTR_UNIQUE;
 n->location = @1;
-n->keys = $3;
-n->including = $5;
-n->options = $6;
+n->nulls_not_distinct = !$2;
+n->keys = $4;
+n->including = $6;
+n->options = $7;
 n->indexname = NULL;
-n->indexspace = $7;
-processCASbits($8, @8, "UNIQUE",
+n->indexspace = $8;
+processCASbits($9, @9, "UNIQUE",
 &n->deferrable, &n->initdeferred, NULL,
 NULL, yyscanner);
 $$ = (Node *)n;
@@ -7411,7 +7420,7 @@ defacl_privilege_target:
 IndexStmt: CREATE opt_unique INDEX opt_concurrently opt_index_name
 ON relation_expr access_method_clause '(' index_params ')'
-opt_include opt_reloptions OptTableSpace where_clause
+opt_include opt_unique_null_treatment opt_reloptions OptTableSpace where_clause
 {
 IndexStmt *n = makeNode(IndexStmt);
 n->unique = $2;
@@ -7421,9 +7430,10 @@ IndexStmt: CREATE opt_unique INDEX opt_concurrently opt_index_name
 n->accessMethod = $8;
 n->indexParams = $10;
 n->indexIncludingParams = $12;
-n->options = $13;
-n->tableSpace = $14;
-n->whereClause = $15;
+n->nulls_not_distinct = !$13;
+n->options = $14;
+n->tableSpace = $15;
+n->whereClause = $16;
 n->excludeOpNames = NIL;
 n->idxcomment = NULL;
 n->indexOid = InvalidOid;
@@ -7441,7 +7451,7 @@ IndexStmt: CREATE opt_unique INDEX opt_concurrently opt_index_name
 }
 | CREATE opt_unique INDEX opt_concurrently IF_P NOT EXISTS name
 ON relation_expr access_method_clause '(' index_params ')'
-opt_include opt_reloptions OptTableSpace where_clause
+opt_include opt_unique_null_treatment opt_reloptions OptTableSpace where_clause
 {
 IndexStmt *n = makeNode(IndexStmt);
 n->unique = $2;
@@ -7451,9 +7461,10 @@ IndexStmt: CREATE opt_unique INDEX opt_concurrently opt_index_name
 n->accessMethod = $11;
 n->indexParams = $13;
 n->indexIncludingParams = $15;
-n->options = $16;
-n->tableSpace = $17;
-n->whereClause = $18;
+n->nulls_not_distinct = !$16;
+n->options = $17;
+n->tableSpace = $18;
+n->whereClause = $19;
 n->excludeOpNames = NIL;
 n->idxcomment = NULL;
 n->indexOid = InvalidOid;
@@ -13802,7 +13813,7 @@ a_expr: c_expr { $$ = $1; }
 else
 $$ = (Node *) makeA_Expr(AEXPR_OP_ALL, $2, $1, $5, @2);
 }
-| UNIQUE select_with_parens
+| UNIQUE opt_unique_null_treatment select_with_parens
 {
 /* Not sure how to get rid of the parentheses
 * but there are lots of shift/reduce errors without them.
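Read as a unified diff, the grammar change boils down to one mapping: `opt_unique_null_treatment` yields true for `NULLS DISTINCT` (and when the clause is absent) and false for `NULLS NOT DISTINCT`, and the actions store the negation so that the default PostgreSQL behavior corresponds to a false `nulls_not_distinct` flag. A minimal Python sketch of that mapping (the function name is illustrative, not part of the patch):

```python
def parse_null_treatment(clause):
    """Mirror opt_unique_null_treatment plus the `!$2` in the actions:
    the absent clause and an explicit NULLS DISTINCT both yield a false
    nulls_not_distinct flag (the default behavior)."""
    if clause is None or clause == "NULLS DISTINCT":
        nulls_distinct = True
    elif clause == "NULLS NOT DISTINCT":
        nulls_distinct = False
    else:
        raise ValueError("unrecognized null treatment clause")
    return not nulls_distinct  # the internal flag is named in the negative

# default and explicit NULLS DISTINCT are equivalent
assert parse_null_treatment(None) is False
assert parse_null_treatment("NULLS DISTINCT") is False
assert parse_null_treatment("NULLS NOT DISTINCT") is True
```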

View File

@@ -1581,6 +1581,7 @@ generateClonedIndexStmt(RangeVar *heapRel, Relation source_idx,
 index->oldCreateSubid = InvalidSubTransactionId;
 index->oldFirstRelfilenodeSubid = InvalidSubTransactionId;
 index->unique = idxrec->indisunique;
+index->nulls_not_distinct = idxrec->indnullsnotdistinct;
 index->primary = idxrec->indisprimary;
 index->transformed = true; /* don't need transformIndexStmt */
 index->concurrent = false;
@@ -2112,6 +2113,7 @@ transformIndexConstraints(CreateStmtContext *cxt)
 equal(index->whereClause, priorindex->whereClause) &&
 equal(index->excludeOpNames, priorindex->excludeOpNames) &&
 strcmp(index->accessMethod, priorindex->accessMethod) == 0 &&
+index->nulls_not_distinct == priorindex->nulls_not_distinct &&
 index->deferrable == priorindex->deferrable &&
 index->initdeferred == priorindex->initdeferred)
 {
@@ -2178,6 +2180,7 @@ transformIndexConstraint(Constraint *constraint, CreateStmtContext *cxt)
 * DefineIndex will check for it.
 */
 }
+index->nulls_not_distinct = constraint->nulls_not_distinct;
 index->isconstraint = true;
 index->deferrable = constraint->deferrable;
 index->initdeferred = constraint->initdeferred;

View File

@@ -1444,6 +1444,9 @@ pg_get_indexdef_worker(Oid indexrelid, int colno,
 {
 appendStringInfoChar(&buf, ')');
+if (idxrec->indnullsnotdistinct)
+appendStringInfo(&buf, " NULLS NOT DISTINCT");
+
 /*
 * If it has options, append "WITH (options)"
 */
@@ -2312,9 +2315,20 @@ pg_get_constraintdef_worker(Oid constraintId, bool fullCommand,
 /* Start off the constraint definition */
 if (conForm->contype == CONSTRAINT_PRIMARY)
-appendStringInfoString(&buf, "PRIMARY KEY (");
+appendStringInfoString(&buf, "PRIMARY KEY ");
 else
-appendStringInfoString(&buf, "UNIQUE (");
+appendStringInfoString(&buf, "UNIQUE ");
+
+indexId = conForm->conindid;
+indtup = SearchSysCache1(INDEXRELID, ObjectIdGetDatum(indexId));
+if (!HeapTupleIsValid(indtup))
+elog(ERROR, "cache lookup failed for index %u", indexId);
+if (conForm->contype == CONSTRAINT_UNIQUE &&
+((Form_pg_index) GETSTRUCT(indtup))->indnullsnotdistinct)
+appendStringInfoString(&buf, "NULLS NOT DISTINCT ");
+
+appendStringInfoString(&buf, "(");
 /* Fetch and build target column list */
 val = SysCacheGetAttr(CONSTROID, tup,
@@ -2327,12 +2341,7 @@ pg_get_constraintdef_worker(Oid constraintId, bool fullCommand,
 appendStringInfoChar(&buf, ')');
-indexId = conForm->conindid;
-
 /* Build including column list (from pg_index.indkeys) */
-indtup = SearchSysCache1(INDEXRELID, ObjectIdGetDatum(indexId));
-if (!HeapTupleIsValid(indtup))
-elog(ERROR, "cache lookup failed for index %u", indexId);
 val = SysCacheGetAttr(INDEXRELID, indtup,
 Anum_pg_index_indnatts, &isnull);
 if (isnull)
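The net effect of these ruleutils.c hunks is a small change to the deparsed constraint text: the `NULLS NOT DISTINCT` keywords are emitted between the constraint keyword and the column list, and only for UNIQUE constraints whose index has `indnullsnotdistinct` set. A rough Python sketch of the resulting string shape (the real code reads `pg_index` through the syscache; the helper name here is illustrative):

```python
def deparse_unique_constraint(contype, nulls_not_distinct, columns):
    """Rebuild the constraint text the way pg_get_constraintdef_worker
    does after this change: NULLS NOT DISTINCT goes after the UNIQUE
    keyword and before the parenthesized column list, and it is never
    printed for PRIMARY KEY."""
    buf = "PRIMARY KEY " if contype == "p" else "UNIQUE "
    if contype == "u" and nulls_not_distinct:
        buf += "NULLS NOT DISTINCT "
    return buf + "(" + ", ".join(columns) + ")"

assert deparse_unique_constraint("u", True, ["i"]) == "UNIQUE NULLS NOT DISTINCT (i)"
assert deparse_unique_constraint("u", False, ["i"]) == "UNIQUE (i)"
assert deparse_unique_constraint("p", True, ["i"]) == "PRIMARY KEY (i)"
```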

View File

@@ -2274,6 +2274,7 @@ RelationReloadIndexInfo(Relation relation)
 * the array fields are allowed to change, though.
 */
 relation->rd_index->indisunique = index->indisunique;
+relation->rd_index->indnullsnotdistinct = index->indnullsnotdistinct;
 relation->rd_index->indisprimary = index->indisprimary;
 relation->rd_index->indisexclusion = index->indisexclusion;
 relation->rd_index->indimmediate = index->indimmediate;

View File

@@ -459,6 +459,7 @@ struct Tuplesortstate
 /* These are specific to the index_btree subcase: */
 bool enforceUnique; /* complain if we find duplicate tuples */
+bool uniqueNullsNotDistinct; /* unique constraint null treatment */
 /* These are specific to the index_hash subcase: */
 uint32 high_mask; /* masks for sortable part of hash code */
@@ -1065,6 +1066,7 @@ Tuplesortstate *
 tuplesort_begin_index_btree(Relation heapRel,
 Relation indexRel,
 bool enforceUnique,
+bool uniqueNullsNotDistinct,
 int workMem,
 SortCoordinate coordinate,
 bool randomAccess)
@@ -1103,6 +1105,7 @@ tuplesort_begin_index_btree(Relation heapRel,
 state->heapRel = heapRel;
 state->indexRel = indexRel;
 state->enforceUnique = enforceUnique;
+state->uniqueNullsNotDistinct = uniqueNullsNotDistinct;
 indexScanKey = _bt_mkscankey(indexRel, NULL);
@@ -4200,14 +4203,15 @@ comparetup_index_btree(const SortTuple *a, const SortTuple *b,
 /*
 * If btree has asked us to enforce uniqueness, complain if two equal
-* tuples are detected (unless there was at least one NULL field).
+* tuples are detected (unless there was at least one NULL field and NULLS
+* NOT DISTINCT was not set).
 *
 * It is sufficient to make the test here, because if two tuples are equal
 * they *must* get compared at some stage of the sort --- otherwise the
 * sort algorithm wouldn't have checked whether one must appear before the
 * other.
 */
-if (state->enforceUnique && !equal_hasnull)
+if (state->enforceUnique && !(!state->uniqueNullsNotDistinct && equal_hasnull))
 {
 Datum values[INDEX_MAX_KEYS];
 bool isnull[INDEX_MAX_KEYS];
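The rewritten condition in comparetup_index_btree is easiest to read as a truth table: equal tuples are reported as duplicates unless the index treats nulls as distinct *and* at least one key field was NULL. A small Python sketch of that predicate (a hypothetical helper mirroring the C expression, not the patch itself):

```python
def reports_duplicate(enforce_unique, nulls_not_distinct, keys_equal, equal_hasnull):
    """Mirror `enforceUnique && !(!uniqueNullsNotDistinct && equal_hasnull)`:
    the NULL escape hatch applies only when the index keeps the classic
    nulls-distinct behavior. keys_equal stands in for the surrounding
    comparison having returned equality."""
    if not (enforce_unique and keys_equal):
        return False
    return not (not nulls_not_distinct and equal_hasnull)

# classic behavior: equal keys containing a NULL never conflict
assert reports_duplicate(True, False, True, True) is False
# NULLS NOT DISTINCT: the same pair now conflicts
assert reports_duplicate(True, True, True, True) is True
# non-null duplicates conflict under either setting
assert reports_duplicate(True, False, True, False) is True
assert reports_duplicate(True, True, True, False) is True
```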

View File

@@ -6482,6 +6482,7 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables)
 i_indkey,
 i_indisclustered,
 i_indisreplident,
+i_indnullsnotdistinct,
 i_contype,
 i_conname,
 i_condeferrable,
@@ -6558,14 +6559,21 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables)
 "(SELECT pg_catalog.array_agg(attstattarget ORDER BY attnum) "
 " FROM pg_catalog.pg_attribute "
 " WHERE attrelid = i.indexrelid AND "
-" attstattarget >= 0) AS indstatvals ");
+" attstattarget >= 0) AS indstatvals, ");
 else
 appendPQExpBuffer(query,
 "0 AS parentidx, "
 "i.indnatts AS indnkeyatts, "
 "i.indnatts AS indnatts, "
 "'' AS indstatcols, "
-"'' AS indstatvals ");
+"'' AS indstatvals, ");
+
+if (fout->remoteVersion >= 150000)
+appendPQExpBuffer(query,
+"i.indnullsnotdistinct ");
+else
+appendPQExpBuffer(query,
+"false AS indnullsnotdistinct ");
 /*
 * The point of the messy-looking outer join is to find a constraint that
@@ -6628,6 +6636,7 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables)
 i_indkey = PQfnumber(res, "indkey");
 i_indisclustered = PQfnumber(res, "indisclustered");
 i_indisreplident = PQfnumber(res, "indisreplident");
+i_indnullsnotdistinct = PQfnumber(res, "indnullsnotdistinct");
 i_contype = PQfnumber(res, "contype");
 i_conname = PQfnumber(res, "conname");
 i_condeferrable = PQfnumber(res, "condeferrable");
@@ -6704,6 +6713,7 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables)
 indxinfo[j].indkeys, indxinfo[j].indnattrs);
 indxinfo[j].indisclustered = (PQgetvalue(res, j, i_indisclustered)[0] == 't');
 indxinfo[j].indisreplident = (PQgetvalue(res, j, i_indisreplident)[0] == 't');
+indxinfo[j].indnullsnotdistinct = (PQgetvalue(res, j, i_indnullsnotdistinct)[0] == 't');
 indxinfo[j].parentidx = atooid(PQgetvalue(res, j, i_parentidx));
 indxinfo[j].partattaches = (SimplePtrList)
 {
@@ -16120,8 +16130,11 @@ dumpConstraint(Archive *fout, const ConstraintInfo *coninfo)
 }
 else
 {
-appendPQExpBuffer(q, "%s (",
+appendPQExpBuffer(q, "%s",
 coninfo->contype == 'p' ? "PRIMARY KEY" : "UNIQUE");
+if (indxinfo->indnullsnotdistinct)
+appendPQExpBuffer(q, " NULLS NOT DISTINCT");
+appendPQExpBuffer(q, " (");
 for (k = 0; k < indxinfo->indnkeyattrs; k++)
 {
 int indkey = (int) indxinfo->indkeys[k];

View File

@@ -398,6 +398,7 @@ typedef struct _indxInfo
 * contains both key and nonkey attributes */
 bool indisclustered;
 bool indisreplident;
+bool indnullsnotdistinct;
 Oid parentidx; /* if a partition, parent index OID */
 SimplePtrList partattaches; /* if partitioned, partition attach objects */

View File

@@ -2181,6 +2181,11 @@ describeOneTableDetails(const char *schemaname,
 else
 appendPQExpBufferStr(&buf, "false AS indisreplident,\n");
+
+if (pset.sversion >= 150000)
+appendPQExpBufferStr(&buf, "i.indnullsnotdistinct,\n");
+else
+appendPQExpBufferStr(&buf, "false AS indnullsnotdistinct,\n");
 appendPQExpBuffer(&buf, " a.amname, c2.relname, "
 "pg_catalog.pg_get_expr(i.indpred, i.indrelid, true)\n"
 "FROM pg_catalog.pg_index i, pg_catalog.pg_class c, pg_catalog.pg_class c2, pg_catalog.pg_am a\n"
@@ -2205,14 +2210,20 @@ describeOneTableDetails(const char *schemaname,
 char *deferrable = PQgetvalue(result, 0, 4);
 char *deferred = PQgetvalue(result, 0, 5);
 char *indisreplident = PQgetvalue(result, 0, 6);
-char *indamname = PQgetvalue(result, 0, 7);
-char *indtable = PQgetvalue(result, 0, 8);
-char *indpred = PQgetvalue(result, 0, 9);
+char *indnullsnotdistinct = PQgetvalue(result, 0, 7);
+char *indamname = PQgetvalue(result, 0, 8);
+char *indtable = PQgetvalue(result, 0, 9);
+char *indpred = PQgetvalue(result, 0, 10);
 if (strcmp(indisprimary, "t") == 0)
 printfPQExpBuffer(&tmpbuf, _("primary key, "));
 else if (strcmp(indisunique, "t") == 0)
-printfPQExpBuffer(&tmpbuf, _("unique, "));
+{
+printfPQExpBuffer(&tmpbuf, _("unique"));
+if (strcmp(indnullsnotdistinct, "t") == 0)
+appendPQExpBufferStr(&tmpbuf, _(" nulls not distinct"));
+appendPQExpBuffer(&tmpbuf, _(", "));
+}
 else
 resetPQExpBuffer(&tmpbuf);
 appendPQExpBuffer(&tmpbuf, "%s, ", indamname);

View File

@@ -53,6 +53,6 @@
 */
 /* yyyymmddN */
-#define CATALOG_VERSION_NO 202201311
+#define CATALOG_VERSION_NO 202202031
 #endif

View File

@@ -34,6 +34,7 @@ CATALOG(pg_index,2610,IndexRelationId) BKI_SCHEMA_MACRO
 int16 indnatts; /* total number of columns in index */
 int16 indnkeyatts; /* number of key columns in index */
 bool indisunique; /* is this a unique index? */
+bool indnullsnotdistinct; /* null treatment in unique index */
 bool indisprimary; /* is this index for primary key? */
 bool indisexclusion; /* is this index for exclusion constraint? */
 bool indimmediate; /* is uniqueness enforced immediately? */

View File

@@ -173,6 +173,7 @@ typedef struct IndexInfo
 uint16 *ii_UniqueStrats; /* array with one entry per column */
 Datum *ii_OpclassOptions; /* array with one entry per column */
 bool ii_Unique;
+bool ii_NullsNotDistinct;
 bool ii_ReadyForInserts;
 bool ii_CheckedUnchanged;
 bool ii_IndexUnchanged;

View File

@@ -96,7 +96,7 @@ extern List *make_ands_implicit(Expr *clause);
 extern IndexInfo *makeIndexInfo(int numattrs, int numkeyattrs, Oid amoid,
 List *expressions, List *predicates,
-bool unique, bool isready, bool concurrent);
+bool unique, bool nulls_not_distinct, bool isready, bool concurrent);
 extern DefElem *makeDefElem(char *name, Node *arg, int location);
 extern DefElem *makeDefElemExtended(char *nameSpace, char *name, Node *arg,

View File

@@ -2277,6 +2277,7 @@ typedef struct Constraint
 char generated_when; /* ALWAYS or BY DEFAULT */
 /* Fields used for unique constraints (UNIQUE and PRIMARY KEY): */
+bool nulls_not_distinct; /* null treatment for UNIQUE constraints */
 List *keys; /* String nodes naming referenced key
 * column(s) */
 List *including; /* String nodes naming referenced nonkey
@@ -2905,6 +2906,7 @@ typedef struct IndexStmt
 SubTransactionId oldFirstRelfilenodeSubid; /* rd_firstRelfilenodeSubid of
 * oldNode */
 bool unique; /* is index unique? */
+bool nulls_not_distinct; /* null treatment for UNIQUE constraints */
 bool primary; /* is index a primary key? */
 bool isconstraint; /* is it for a pkey/unique constraint? */
 bool deferrable; /* is the constraint DEFERRABLE? */

View File

@@ -208,6 +208,7 @@ extern Tuplesortstate *tuplesort_begin_cluster(TupleDesc tupDesc,
 extern Tuplesortstate *tuplesort_begin_index_btree(Relation heapRel,
 Relation indexRel,
 bool enforceUnique,
+bool uniqueNullsNotDistinct,
 int workMem, SortCoordinate coordinate,
 bool randomAccess);
 extern Tuplesortstate *tuplesort_begin_index_hash(Relation heapRel,

View File

@@ -445,6 +445,29 @@ SELECT * FROM UNIQUE_TBL;
  6 | six-upsert-insert
 (7 rows)
 
+DROP TABLE UNIQUE_TBL;
+CREATE TABLE UNIQUE_TBL (i int UNIQUE NULLS NOT DISTINCT, t text);
+INSERT INTO UNIQUE_TBL VALUES (1, 'one');
+INSERT INTO UNIQUE_TBL VALUES (2, 'two');
+INSERT INTO UNIQUE_TBL VALUES (1, 'three');
+ERROR:  duplicate key value violates unique constraint "unique_tbl_i_key"
+DETAIL:  Key (i)=(1) already exists.
+INSERT INTO UNIQUE_TBL VALUES (4, 'four');
+INSERT INTO UNIQUE_TBL VALUES (5, 'one');
+INSERT INTO UNIQUE_TBL (t) VALUES ('six');
+INSERT INTO UNIQUE_TBL (t) VALUES ('seven');
+ERROR:  duplicate key value violates unique constraint "unique_tbl_i_key"
+DETAIL:  Key (i)=(null) already exists.
+SELECT * FROM UNIQUE_TBL;
+ i |  t
+---+------
+ 1 | one
+ 2 | two
+ 4 | four
+ 5 | one
+   | six
+(5 rows)
+
 DROP TABLE UNIQUE_TBL;
 CREATE TABLE UNIQUE_TBL (i int, t text,
 UNIQUE(i,t));

View File

@@ -1272,6 +1272,67 @@ SELECT count(*) FROM tenk1 WHERE stringu1 = 'TVAAAA';
 DROP INDEX hash_tuplesort_idx;
 RESET maintenance_work_mem;
 --
+-- Test unique null behavior
+--
+CREATE TABLE unique_tbl (i int, t text);
+CREATE UNIQUE INDEX unique_idx1 ON unique_tbl (i) NULLS DISTINCT;
+CREATE UNIQUE INDEX unique_idx2 ON unique_tbl (i) NULLS NOT DISTINCT;
+INSERT INTO unique_tbl VALUES (1, 'one');
+INSERT INTO unique_tbl VALUES (2, 'two');
+INSERT INTO unique_tbl VALUES (3, 'three');
+INSERT INTO unique_tbl VALUES (4, 'four');
+INSERT INTO unique_tbl VALUES (5, 'one');
+INSERT INTO unique_tbl (t) VALUES ('six');
+INSERT INTO unique_tbl (t) VALUES ('seven');  -- error from unique_idx2
+ERROR:  duplicate key value violates unique constraint "unique_idx2"
+DETAIL:  Key (i)=(null) already exists.
+DROP INDEX unique_idx1, unique_idx2;
+INSERT INTO unique_tbl (t) VALUES ('seven');
+-- build indexes on filled table
+CREATE UNIQUE INDEX unique_idx3 ON unique_tbl (i) NULLS DISTINCT;  -- ok
+CREATE UNIQUE INDEX unique_idx4 ON unique_tbl (i) NULLS NOT DISTINCT;  -- error
+ERROR:  could not create unique index "unique_idx4"
+DETAIL:  Key (i)=(null) is duplicated.
+DELETE FROM unique_tbl WHERE t = 'seven';
+CREATE UNIQUE INDEX unique_idx4 ON unique_tbl (i) NULLS NOT DISTINCT;  -- ok now
+\d unique_tbl
+           Table "public.unique_tbl"
+ Column |  Type   | Collation | Nullable | Default
+--------+---------+-----------+----------+---------
+ i      | integer |           |          |
+ t      | text    |           |          |
+Indexes:
+    "unique_idx3" UNIQUE, btree (i)
+    "unique_idx4" UNIQUE, btree (i) NULLS NOT DISTINCT
+
+\d unique_idx3
+      Index "public.unique_idx3"
+ Column |  Type   | Key? | Definition
+--------+---------+------+------------
+ i      | integer | yes  | i
+unique, btree, for table "public.unique_tbl"
+
+\d unique_idx4
+      Index "public.unique_idx4"
+ Column |  Type   | Key? | Definition
+--------+---------+------+------------
+ i      | integer | yes  | i
+unique nulls not distinct, btree, for table "public.unique_tbl"
+
+SELECT pg_get_indexdef('unique_idx3'::regclass);
+                           pg_get_indexdef
+----------------------------------------------------------------------
+ CREATE UNIQUE INDEX unique_idx3 ON public.unique_tbl USING btree (i)
+(1 row)
+
+SELECT pg_get_indexdef('unique_idx4'::regclass);
+                                     pg_get_indexdef
+-----------------------------------------------------------------------------------------
+ CREATE UNIQUE INDEX unique_idx4 ON public.unique_tbl USING btree (i) NULLS NOT DISTINCT
+(1 row)
+
+DROP TABLE unique_tbl;
+--
 -- Test functional index
 --
 CREATE TABLE func_index_heap (f1 text, f2 text);

View File

@@ -306,6 +306,20 @@ SELECT * FROM UNIQUE_TBL;
 DROP TABLE UNIQUE_TBL;
 
+CREATE TABLE UNIQUE_TBL (i int UNIQUE NULLS NOT DISTINCT, t text);
+INSERT INTO UNIQUE_TBL VALUES (1, 'one');
+INSERT INTO UNIQUE_TBL VALUES (2, 'two');
+INSERT INTO UNIQUE_TBL VALUES (1, 'three');
+INSERT INTO UNIQUE_TBL VALUES (4, 'four');
+INSERT INTO UNIQUE_TBL VALUES (5, 'one');
+INSERT INTO UNIQUE_TBL (t) VALUES ('six');
+INSERT INTO UNIQUE_TBL (t) VALUES ('seven');
+
+SELECT * FROM UNIQUE_TBL;
+
+DROP TABLE UNIQUE_TBL;
+
 CREATE TABLE UNIQUE_TBL (i int, t text,
 UNIQUE(i,t));

View File

@@ -387,6 +387,43 @@ DROP INDEX hash_tuplesort_idx;
 RESET maintenance_work_mem;
 
+--
+-- Test unique null behavior
+--
+CREATE TABLE unique_tbl (i int, t text);
+CREATE UNIQUE INDEX unique_idx1 ON unique_tbl (i) NULLS DISTINCT;
+CREATE UNIQUE INDEX unique_idx2 ON unique_tbl (i) NULLS NOT DISTINCT;
+INSERT INTO unique_tbl VALUES (1, 'one');
+INSERT INTO unique_tbl VALUES (2, 'two');
+INSERT INTO unique_tbl VALUES (3, 'three');
+INSERT INTO unique_tbl VALUES (4, 'four');
+INSERT INTO unique_tbl VALUES (5, 'one');
+INSERT INTO unique_tbl (t) VALUES ('six');
+INSERT INTO unique_tbl (t) VALUES ('seven');  -- error from unique_idx2
+
+DROP INDEX unique_idx1, unique_idx2;
+INSERT INTO unique_tbl (t) VALUES ('seven');
+
+-- build indexes on filled table
+CREATE UNIQUE INDEX unique_idx3 ON unique_tbl (i) NULLS DISTINCT;  -- ok
+CREATE UNIQUE INDEX unique_idx4 ON unique_tbl (i) NULLS NOT DISTINCT;  -- error
+DELETE FROM unique_tbl WHERE t = 'seven';
+CREATE UNIQUE INDEX unique_idx4 ON unique_tbl (i) NULLS NOT DISTINCT;  -- ok now
+
+\d unique_tbl
+\d unique_idx3
+\d unique_idx4
+
+SELECT pg_get_indexdef('unique_idx3'::regclass);
+SELECT pg_get_indexdef('unique_idx4'::regclass);
+
+DROP TABLE unique_tbl;
+
 --
 -- Test functional index
 --
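The regression scripts above can be summarized by a toy in-memory model of the two null-treatment modes. This is only an illustration of the intended semantics (class and method names are hypothetical), not how the btree code implements it:

```python
NULL = object()  # stand-in for SQL NULL, which is not equal to itself

class UniqueIndex:
    """Toy model: with nulls_not_distinct=False (the default), any number
    of NULL keys coexist; with True, NULL keys collide like ordinary
    duplicates."""

    def __init__(self, nulls_not_distinct=False):
        self.nulls_not_distinct = nulls_not_distinct
        self.keys = []

    def insert(self, key):
        if key is NULL:
            # NULLs conflict with each other only under NULLS NOT DISTINCT
            conflict = self.nulls_not_distinct and any(k is NULL for k in self.keys)
        else:
            conflict = any(k is not NULL and k == key for k in self.keys)
        if conflict:
            raise ValueError("duplicate key value violates unique constraint")
        self.keys.append(key)

# default: multiple NULLs allowed, as with unique_idx1 above
idx = UniqueIndex()
for k in (1, 2, NULL, NULL):
    idx.insert(k)

# NULLS NOT DISTINCT: the second NULL collides, as with unique_idx2
idx2 = UniqueIndex(nulls_not_distinct=True)
idx2.insert(NULL)
try:
    idx2.insert(NULL)
    raise AssertionError("expected a duplicate-key error")
except ValueError:
    pass
```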