/*-------------------------------------------------------------------------
 *
 * executor.h
 *	  support for the POSTGRES executor module
 *
 *
 * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 * src/include/executor/executor.h
 *
 *-------------------------------------------------------------------------
 */
#ifndef EXECUTOR_H
#define EXECUTOR_H
#include "catalog/partition.h"
#include "executor/execdesc.h"
#include "nodes/parsenodes.h"

/*
 * The "eflags" argument to ExecutorStart and the various ExecInitNode
 * routines is a bitwise OR of the following flag bits, which tell the
 * called plan node what to expect.  Note that the flags will get modified
 * as they are passed down the plan tree, since an upper node may require
 * functionality in its subnode not demanded of the plan as a whole
 * (example: MergeJoin requires mark/restore capability in its inner input),
 * or an upper node may shield its input from some functionality requirement
 * (example: Materialize shields its input from needing to do backward scan).
 *
 * EXPLAIN_ONLY indicates that the plan tree is being initialized just so
 * EXPLAIN can print it out; it will not be run.  Hence, no side-effects
 * of startup should occur.  However, error checks (such as permission checks)
 * should be performed.
 *
 * REWIND indicates that the plan node should try to efficiently support
 * rescans without parameter changes.  (Nodes must support ExecReScan calls
 * in any case, but if this flag was not given, they are at liberty to do it
 * through complete recalculation.  Note that a parameter change forces a
 * full recalculation in any case.)
 *
 * BACKWARD indicates that the plan node must respect the es_direction flag.
 * When this is not passed, the plan node will only be run forwards.
 *
 * MARK indicates that the plan node must support Mark/Restore calls.
 * When this is not passed, no Mark/Restore will occur.
 *
 * SKIP_TRIGGERS tells ExecutorStart/ExecutorFinish to skip calling
 * AfterTriggerBeginQuery/AfterTriggerEndQuery.  This does not necessarily
 * mean that the plan can't queue any AFTER triggers; just that the caller
 * is responsible for there being a trigger context for them to be queued in.
 *
 * WITH/WITHOUT_OIDS tell the executor to emit tuples with or without space
 * for OIDs, respectively.  These are currently used only for CREATE TABLE AS.
 * If neither is set, the plan may or may not produce tuples including OIDs.
 */
#define EXEC_FLAG_EXPLAIN_ONLY	0x0001	/* EXPLAIN, no ANALYZE */
#define EXEC_FLAG_REWIND		0x0002	/* need efficient rescan */
#define EXEC_FLAG_BACKWARD		0x0004	/* need backward scan */
#define EXEC_FLAG_MARK			0x0008	/* need mark/restore */
#define EXEC_FLAG_SKIP_TRIGGERS 0x0010	/* skip AfterTrigger calls */
#define EXEC_FLAG_WITH_OIDS		0x0020	/* force OIDs in returned tuples */
#define EXEC_FLAG_WITHOUT_OIDS	0x0040	/* force no OIDs in returned tuples */
#define EXEC_FLAG_WITH_NO_DATA	0x0080	/* rel scannability doesn't matter */
/*
 * ExecEvalExpr was formerly a function containing a switch statement;
 * now it's just a macro invoking the function pointed to by an ExprState
 * node.  Beware of double evaluation of the ExprState argument!
 */
#define ExecEvalExpr(expr, econtext, isNull) \
	((*(expr)->evalfunc) (expr, econtext, isNull))

/* Hook for plugins to get control in ExecutorStart() */
typedef void (*ExecutorStart_hook_type) (QueryDesc *queryDesc, int eflags);
extern PGDLLIMPORT ExecutorStart_hook_type ExecutorStart_hook;

/* Hook for plugins to get control in ExecutorRun() */
typedef void (*ExecutorRun_hook_type) (QueryDesc *queryDesc,
									   ScanDirection direction,
									   uint64 count);
extern PGDLLIMPORT ExecutorRun_hook_type ExecutorRun_hook;

/* Hook for plugins to get control in ExecutorFinish() */
typedef void (*ExecutorFinish_hook_type) (QueryDesc *queryDesc);
extern PGDLLIMPORT ExecutorFinish_hook_type ExecutorFinish_hook;

/* Hook for plugins to get control in ExecutorEnd() */
typedef void (*ExecutorEnd_hook_type) (QueryDesc *queryDesc);
extern PGDLLIMPORT ExecutorEnd_hook_type ExecutorEnd_hook;

/* Hook for plugins to get control in ExecCheckRTPerms() */
typedef bool (*ExecutorCheckPerms_hook_type) (List *, bool);
extern PGDLLIMPORT ExecutorCheckPerms_hook_type ExecutorCheckPerms_hook;
/*
 * prototypes from functions in execAmi.c
 */
struct Path;					/* avoid including relation.h here */

extern void ExecReScan(PlanState *node);
extern void ExecMarkPos(PlanState *node);
extern void ExecRestrPos(PlanState *node);
extern bool ExecSupportsMarkRestore(struct Path *pathnode);
extern bool ExecSupportsBackwardScan(Plan *node);
extern bool ExecMaterializesOutput(NodeTag plantype);

/*
 * prototypes from functions in execCurrent.c
 */
extern bool execCurrentOf(CurrentOfExpr *cexpr,
			  ExprContext *econtext,
			  Oid table_oid,
			  ItemPointer current_tid);

/*
 * prototypes from functions in execGrouping.c
 */
extern bool execTuplesMatch(TupleTableSlot *slot1,
				TupleTableSlot *slot2,
				int numCols,
				AttrNumber *matchColIdx,
				FmgrInfo *eqfunctions,
				MemoryContext evalContext);
extern bool execTuplesUnequal(TupleTableSlot *slot1,
				  TupleTableSlot *slot2,
				  int numCols,
				  AttrNumber *matchColIdx,
				  FmgrInfo *eqfunctions,
				  MemoryContext evalContext);
extern FmgrInfo *execTuplesMatchPrepare(int numCols,
					   Oid *eqOperators);
extern void execTuplesHashPrepare(int numCols,
					  Oid *eqOperators,
					  FmgrInfo **eqFunctions,
					  FmgrInfo **hashFunctions);
extern TupleHashTable BuildTupleHashTable(int numCols, AttrNumber *keyColIdx,
					FmgrInfo *eqfunctions,
					FmgrInfo *hashfunctions,
					long nbuckets, Size additionalsize,
					MemoryContext tablecxt,
					MemoryContext tempcxt, bool use_variable_hash_iv);
extern TupleHashEntry LookupTupleHashEntry(TupleHashTable hashtable,
					 TupleTableSlot *slot,
					 bool *isnew);
extern TupleHashEntry FindTupleHashEntry(TupleHashTable hashtable,
				   TupleTableSlot *slot,
				   FmgrInfo *eqfunctions,
				   FmgrInfo *hashfunctions);

/*
 * prototypes from functions in execJunk.c
 */
extern JunkFilter *ExecInitJunkFilter(List *targetList, bool hasoid,
				   TupleTableSlot *slot);
extern JunkFilter *ExecInitJunkFilterConversion(List *targetList,
							 TupleDesc cleanTupType,
							 TupleTableSlot *slot);
extern AttrNumber ExecFindJunkAttribute(JunkFilter *junkfilter,
					  const char *attrName);
extern AttrNumber ExecFindJunkAttributeInTlist(List *targetlist,
							 const char *attrName);
extern Datum ExecGetJunkAttribute(TupleTableSlot *slot, AttrNumber attno,
					 bool *isNull);
extern TupleTableSlot *ExecFilterJunk(JunkFilter *junkfilter,
				TupleTableSlot *slot);

/*
|
|
|
|
* prototypes from functions in execMain.c
|
|
|
|
*/
|
2006-02-28 05:10:28 +01:00
|
|
|
extern void ExecutorStart(QueryDesc *queryDesc, int eflags);
|
2008-11-19 02:10:24 +01:00
|
|
|
extern void standard_ExecutorStart(QueryDesc *queryDesc, int eflags);
|
2008-10-31 22:07:55 +01:00
|
|
|
extern void ExecutorRun(QueryDesc *queryDesc,
|
Widen query numbers-of-tuples-processed counters to uint64.
This patch widens SPI_processed, EState's es_processed field, PortalData's
portalPos field, FuncCallContext's call_cntr and max_calls fields,
ExecutorRun's count argument, PortalRunFetch's result, and the max number
of rows in a SPITupleTable to uint64, and deals with (I hope) all the
ensuing fallout. Some of these values were declared uint32 before, and
others "long".
I also removed PortalData's posOverflow field, since that logic seems
pretty useless given that portalPos is now always 64 bits.
The user-visible results are that command tags for SELECT etc will
correctly report tuple counts larger than 4G, as will plpgsql's GET
GET DIAGNOSTICS ... ROW_COUNT command. Queries processing more tuples
than that are still not exactly the norm, but they're becoming more
common.
Most values associated with FETCH/MOVE distances, such as PortalRun's count
argument and the count argument of most SPI functions that have one, remain
declared as "long". It's not clear whether it would be worth promoting
those to int64; but it would definitely be a large dollop of additional
API churn on top of this, and it would only help 32-bit platforms which
seem relatively less likely to see any benefit.
Andreas Scherbaum, reviewed by Christian Ullrich, additional hacking by me
2016-03-12 22:05:10 +01:00
|
|
|
ScanDirection direction, uint64 count);
|
2008-10-31 22:07:55 +01:00
|
|
|
extern void standard_ExecutorRun(QueryDesc *queryDesc,
|
Widen query numbers-of-tuples-processed counters to uint64.
This patch widens SPI_processed, EState's es_processed field, PortalData's
portalPos field, FuncCallContext's call_cntr and max_calls fields,
ExecutorRun's count argument, PortalRunFetch's result, and the max number
of rows in a SPITupleTable to uint64, and deals with (I hope) all the
ensuing fallout. Some of these values were declared uint32 before, and
others "long".
I also removed PortalData's posOverflow field, since that logic seems
pretty useless given that portalPos is now always 64 bits.
The user-visible results are that command tags for SELECT etc will
correctly report tuple counts larger than 4G, as will plpgsql's GET
GET DIAGNOSTICS ... ROW_COUNT command. Queries processing more tuples
than that are still not exactly the norm, but they're becoming more
common.
Most values associated with FETCH/MOVE distances, such as PortalRun's count
argument and the count argument of most SPI functions that have one, remain
declared as "long". It's not clear whether it would be worth promoting
those to int64; but it would definitely be a large dollop of additional
API churn on top of this, and it would only help 32-bit platforms which
seem relatively less likely to see any benefit.
Andreas Scherbaum, reviewed by Christian Ullrich, additional hacking by me
2016-03-12 22:05:10 +01:00
|
|
|
ScanDirection direction, uint64 count);
|
2011-02-27 19:43:29 +01:00
|
|
|
extern void ExecutorFinish(QueryDesc *queryDesc);
|
|
|
|
extern void standard_ExecutorFinish(QueryDesc *queryDesc);
|
2002-12-05 16:50:39 +01:00
|
|
|
extern void ExecutorEnd(QueryDesc *queryDesc);
|
2008-11-19 02:10:24 +01:00
|
|
|
extern void standard_ExecutorEnd(QueryDesc *queryDesc);
|
2003-03-11 20:40:24 +01:00
|
|
|
extern void ExecutorRewind(QueryDesc *queryDesc);
|
2010-07-22 02:47:59 +02:00
|
|
|
extern bool ExecCheckRTPerms(List *rangeTable, bool ereport_on_violation);
|
2011-02-26 00:56:23 +01:00
|
|
|
extern void CheckValidResultRel(Relation resultRel, CmdType operation);
|
2008-03-28 01:21:56 +01:00
|
|
|
extern void InitResultRelInfo(ResultRelInfo *resultRelInfo,
|
|
|
|
Relation resultRelationDesc,
|
|
|
|
Index resultRelationIndex,
|
2017-01-04 20:36:34 +01:00
|
|
|
Relation partition_root,
|
2009-12-15 05:57:48 +01:00
|
|
|
int instrument_options);
|
2007-08-15 23:39:50 +02:00
|
|
|
extern ResultRelInfo *ExecGetTriggerResultRel(EState *estate, Oid relid);
|
2004-01-22 03:23:21 +01:00
|
|
|
extern bool ExecContextForcesOids(PlanState *planstate, bool *hasoids);
|
2003-07-21 19:05:12 +02:00
|
|
|
extern void ExecConstraints(ResultRelInfo *resultRelInfo,
|
2017-01-04 20:36:34 +01:00
|
|
|
TupleTableSlot *slot, TupleTableSlot *orig_slot,
|
|
|
|
EState *estate);
|
2015-04-25 02:34:26 +02:00
|
|
|
extern void ExecWithCheckOptions(WCOKind kind, ResultRelInfo *resultRelInfo,
|
2013-07-18 23:10:16 +02:00
|
|
|
TupleTableSlot *slot, EState *estate);
|
Add support for INSERT ... ON CONFLICT DO NOTHING/UPDATE.
The newly added ON CONFLICT clause allows to specify an alternative to
raising a unique or exclusion constraint violation error when inserting.
ON CONFLICT refers to constraints that can either be specified using a
inference clause (by specifying the columns of a unique constraint) or
by naming a unique or exclusion constraint. DO NOTHING avoids the
constraint violation, without touching the pre-existing row. DO UPDATE
SET ... [WHERE ...] updates the pre-existing tuple, and has access to
both the tuple proposed for insertion and the existing tuple; the
optional WHERE clause can be used to prevent an update from being
executed. The UPDATE SET and WHERE clauses have access to the tuple
proposed for insertion using the "magic" EXCLUDED alias, and to the
pre-existing tuple using the table name or its alias.
This feature is often referred to as upsert.
This is implemented using a new infrastructure called "speculative
insertion". It is an optimistic variant of regular insertion that first
does a pre-check for existing tuples and then attempts an insert. If a
violating tuple was inserted concurrently, the speculatively inserted
tuple is deleted and a new attempt is made. If the pre-check finds a
matching tuple the alternative DO NOTHING or DO UPDATE action is taken.
If the insertion succeeds without detecting a conflict, the tuple is
deemed inserted.
To handle the possible ambiguity between the excluded alias and a table
named excluded, and for convenience with long relation names, INSERT
INTO now can alias its target table.
Bumps catversion as stored rules change.
Author: Peter Geoghegan, with significant contributions from Heikki
Linnakangas and Andres Freund. Testing infrastructure by Jeff Janes.
Reviewed-By: Heikki Linnakangas, Andres Freund, Robert Haas, Simon Riggs,
Dean Rasheed, Stephen Frost and many others.
2015-05-08 05:31:36 +02:00
|
|
|
extern LockTupleMode ExecUpdateLockMode(EState *estate, ResultRelInfo *relinfo);
|
Add support for doing late row locking in FDWs.
Previously, FDWs could only do "early row locking", that is lock a row as
soon as it's fetched, even though local restriction/join conditions might
discard the row later. This patch adds callbacks that allow FDWs to do
late locking in the same way that it's done for regular tables.
To make use of this feature, an FDW must support the "ctid" column as a
unique row identifier. Currently, since ctid has to be of type TID,
the feature is of limited use, though in principle it could be used by
postgres_fdw. We may eventually allow FDWs to specify another data type
for ctid, which would make it possible for more FDWs to use this feature.
This commit does not modify postgres_fdw to use late locking. We've
tested some prototype code for that, but it's not in committable shape,
and besides it's quite unclear whether it actually makes sense to do late
locking against a remote server. The extra round trips required are likely
to outweigh any benefit from improved concurrency.
Etsuro Fujita, reviewed by Ashutosh Bapat, and hacked up a lot by me
2015-05-12 20:10:10 +02:00
|
|
|
extern ExecRowMark *ExecFindRowMark(EState *estate, Index rti, bool missing_ok);
|
2011-01-13 02:47:02 +01:00
|
|
|
extern ExecAuxRowMark *ExecBuildAuxRowMark(ExecRowMark *erm, List *targetlist);
|
Re-implement EvalPlanQual processing to improve its performance and eliminate
a lot of strange behaviors that occurred in join cases. We now identify the
"current" row for every joined relation in UPDATE, DELETE, and SELECT FOR
UPDATE/SHARE queries. If an EvalPlanQual recheck is necessary, we jam the
appropriate row into each scan node in the rechecking plan, forcing it to emit
only that one row. The former behavior could rescan the whole of each joined
relation for each recheck, which was terrible for performance, and what's much
worse could result in duplicated output tuples.
Also, the original implementation of EvalPlanQual could not re-use the recheck
execution tree --- it had to go through a full executor init and shutdown for
every row to be tested. To avoid this overhead, I've associated a special
runtime Param with each LockRows or ModifyTable plan node, and arranged to
make every scan node below such a node depend on that Param. Thus, by
signaling a change in that Param, the EPQ machinery can just rescan the
already-built test plan.
This patch also adds a prohibition on set-returning functions in the
targetlist of SELECT FOR UPDATE/SHARE. This is needed to avoid the
duplicate-output-tuple problem. It seems fairly reasonable since the
other restrictions on SELECT FOR UPDATE are meant to ensure that there
is a unique correspondence between source tuples and result tuples,
which an output SRF destroys as much as anything else does.
2009-10-26 03:26:45 +01:00
|
|
|
extern TupleTableSlot *EvalPlanQual(EState *estate, EPQState *epqstate,
|
Improve concurrency of foreign key locking
This patch introduces two additional lock modes for tuples: "SELECT FOR
KEY SHARE" and "SELECT FOR NO KEY UPDATE". These don't block each
other, in contrast with already existing "SELECT FOR SHARE" and "SELECT
FOR UPDATE". UPDATE commands that do not modify the values stored in
the columns that are part of the key of the tuple now grab a SELECT FOR
NO KEY UPDATE lock on the tuple, allowing them to proceed concurrently
with tuple locks of the FOR KEY SHARE variety.
Foreign key triggers now use FOR KEY SHARE instead of FOR SHARE; this
means the concurrency improvement applies to them, which is the whole
point of this patch.
The added tuple lock semantics require some rejiggering of the multixact
module, so that the locking level that each transaction is holding can
be stored alongside its Xid. Also, multixacts now need to persist
across server restarts and crashes, because they can now represent not
only tuple locks, but also tuple updates. This means we need more
careful tracking of lifetime of pg_multixact SLRU files; since they now
persist longer, we require more infrastructure to figure out when they
can be removed. pg_upgrade also needs to be careful to copy
pg_multixact files over from the old server to the new, or at least part
of multixact.c state, depending on the versions of the old and new
servers.
Tuple time qualification rules (HeapTupleSatisfies routines) need to be
careful not to consider tuples with the "is multi" infomask bit set as
being only locked; they might need to look up MultiXact values (i.e.
possibly do pg_multixact I/O) to find out the Xid that updated a tuple,
whereas they previously were assured to only use information readily
available from the tuple header. This is considered acceptable, because
the extra I/O would involve cases that would previously cause some
commands to block waiting for concurrent transactions to finish.
Another important change is the fact that locking tuples that have
previously been updated causes the future versions to be marked as
locked, too; this is essential for correctness of foreign key checks.
This causes additional WAL-logging, also (there was previously a single
WAL record for a locked tuple; now there are as many as updated copies
of the tuple there exist.)
With all this in place, contention related to tuples being checked by
foreign key rules should be much reduced.
As a bonus, the old behavior that a subtransaction grabbing a stronger
tuple lock than the parent (sub)transaction held on a given tuple and
later aborting caused the weaker lock to be lost, has been fixed.
Many new spec files were added for isolation tester framework, to ensure
overall behavior is sane. There's probably room for several more tests.
There were several reviewers of this patch; in particular, Noah Misch
and Andres Freund spent considerable time in it. Original idea for the
patch came from Simon Riggs, after a problem report by Joel Jacobson.
Most code is from me, with contributions from Marti Raudsepp, Alexander
Shulgin, Noah Misch and Andres Freund.
This patch was discussed in several pgsql-hackers threads; the most
important start at the following message-ids:
AANLkTimo9XVcEzfiBR-ut3KVNDkjm2Vxh+t8kAmWjPuv@mail.gmail.com
1290721684-sup-3951@alvh.no-ip.org
1294953201-sup-2099@alvh.no-ip.org
1320343602-sup-2290@alvh.no-ip.org
1339690386-sup-8927@alvh.no-ip.org
4FE5FF020200002500048A3D@gw.wicourts.gov
4FEAB90A0200002500048B7D@gw.wicourts.gov
					 Relation relation, Index rti, int lockmode,
					 ItemPointer tid, TransactionId priorXmax);
Re-implement EvalPlanQual processing to improve its performance and eliminate
a lot of strange behaviors that occurred in join cases. We now identify the
"current" row for every joined relation in UPDATE, DELETE, and SELECT FOR
UPDATE/SHARE queries. If an EvalPlanQual recheck is necessary, we jam the
appropriate row into each scan node in the rechecking plan, forcing it to emit
only that one row. The former behavior could rescan the whole of each joined
relation for each recheck, which was terrible for performance, and what's much
worse could result in duplicated output tuples.
Also, the original implementation of EvalPlanQual could not re-use the recheck
execution tree --- it had to go through a full executor init and shutdown for
every row to be tested. To avoid this overhead, I've associated a special
runtime Param with each LockRows or ModifyTable plan node, and arranged to
make every scan node below such a node depend on that Param. Thus, by
signaling a change in that Param, the EPQ machinery can just rescan the
already-built test plan.
This patch also adds a prohibition on set-returning functions in the
targetlist of SELECT FOR UPDATE/SHARE. This is needed to avoid the
duplicate-output-tuple problem. It seems fairly reasonable since the
other restrictions on SELECT FOR UPDATE are meant to ensure that there
is a unique correspondence between source tuples and result tuples,
which an output SRF destroys as much as anything else does.
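The recheck mechanism described above can be illustrated with a minimal sketch. This is not PostgreSQL code; `ToyScan`, `toy_set_test_tuple`, and `toy_scan_next` are invented stand-ins showing how a scan node, once a test tuple has been jammed into it (analogous to EvalPlanQualSetTuple), emits exactly that one row and then reports end-of-scan instead of rescanning its whole relation.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy sketch (not PostgreSQL code): a scan node that, in EPQ-style
 * recheck mode, emits only the jammed "current" tuple. */
typedef struct ToyScan
{
	const int  *rows;			/* the underlying "relation" */
	int			nrows;
	int			pos;			/* normal scan position */
	bool		epq_active;		/* recheck mode: emit only the test tuple */
	int			epq_tuple;		/* the jammed current row */
	bool		epq_done;		/* test tuple already returned? */
} ToyScan;

/* Analogous to EvalPlanQualSetTuple: jam the row to recheck. */
static void
toy_set_test_tuple(ToyScan *scan, int tuple)
{
	scan->epq_active = true;
	scan->epq_tuple = tuple;
	scan->epq_done = false;
}

/* Returns the next row, or -1 at end of scan. */
static int
toy_scan_next(ToyScan *scan)
{
	if (scan->epq_active)
	{
		if (scan->epq_done)
			return -1;			/* only one row in recheck mode */
		scan->epq_done = true;
		return scan->epq_tuple;
	}
	if (scan->pos >= scan->nrows)
		return -1;
	return scan->rows[scan->pos++];
}
```

In the real executor the switch into recheck mode is signaled through the runtime Param associated with the LockRows or ModifyTable node, so the already-built plan tree can simply be rescanned.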
extern HeapTuple EvalPlanQualFetch(EState *estate, Relation relation,
				  int lockmode, LockWaitPolicy wait_policy, ItemPointer tid,
				  TransactionId priorXmax);
extern void EvalPlanQualInit(EPQState *epqstate, EState *estate,
				 Plan *subplan, List *auxrowmarks, int epqParam);
extern void EvalPlanQualSetPlan(EPQState *epqstate,
					Plan *subplan, List *auxrowmarks);
extern void EvalPlanQualSetTuple(EPQState *epqstate, Index rti,
					 HeapTuple tuple);
extern HeapTuple EvalPlanQualGetTuple(EPQState *epqstate, Index rti);
extern void ExecSetupPartitionTupleRouting(Relation rel,
							   PartitionDispatch **pd,
							   ResultRelInfo **partitions,
							   TupleConversionMap ***tup_conv_maps,
							   TupleTableSlot **partition_tuple_slot,
							   int *num_parted, int *num_partitions);
Implement table partitioning.
Table partitioning is like table inheritance and reuses much of the
existing infrastructure, but there are some important differences.
The parent is called a partitioned table and is always empty; it may
not have indexes or non-inherited constraints, since those make no
sense for a relation with no data of its own. The children are called
partitions and contain all of the actual data. Each partition has an
implicit partitioning constraint. Multiple inheritance is not
allowed, and partitioning and inheritance can't be mixed. Partitions
can't have extra columns and may not allow nulls unless the parent
does. Tuples inserted into the parent are automatically routed to the
correct partition, so tuple-routing ON INSERT triggers are not needed.
Tuple routing isn't yet supported for partitions which are foreign
tables, and it doesn't handle updates that cross partition boundaries.
Currently, tables can be range-partitioned or list-partitioned. List
partitioning is limited to a single column, but range partitioning can
involve multiple columns. A partitioning "column" can be an
expression.
Because table partitioning is less general than table inheritance, it
is hoped that it will be easier to reason about properties of
partitions, and therefore that this will serve as a better foundation
for a variety of possible optimizations, including query planner
optimizations. The tuple routing that this patch does, based on
the implicit partitioning constraints, is an example of this, but it
seems likely that many other useful optimizations are also possible.
Amit Langote, reviewed and tested by Robert Haas, Ashutosh Bapat,
Amit Kapila, Rajkumar Raghuwanshi, Corey Huinker, Jaime Casanova,
Rushabh Lathia, Erik Rijkers, among others. Minor revisions by me.
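The routing of an inserted tuple to the partition whose implicit constraint accepts it can be sketched in miniature. This is not PostgreSQL code; `toy_find_range_partition` is an invented stand-in for what ExecFindPartition does conceptually for a single-column range-partitioned table, given the sorted exclusive upper bounds of its partitions.

```c
#include <assert.h>
#include <stddef.h>

/* Toy sketch (not PostgreSQL code): route a key to a range partition by
 * locating the first partition whose exclusive upper bound exceeds the
 * key, mirroring the implicit partition constraint check.  Returns the
 * partition index, or -1 if no partition accepts the key. */
static int
toy_find_range_partition(const int *upper_bounds, int nparts, int key)
{
	/* binary search for the first bound strictly greater than key */
	int			lo = 0,
				hi = nparts;

	while (lo < hi)
	{
		int			mid = lo + (hi - lo) / 2;

		if (key < upper_bounds[mid])
			hi = mid;
		else
			lo = mid + 1;
	}
	return (lo < nparts) ? lo : -1;
}
```

The real implementation walks a tree of PartitionDispatch structures so that multi-level partitioning and multi-column range bounds are handled, but the per-level decision is this same bound comparison.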
extern int ExecFindPartition(ResultRelInfo *resultRelInfo,
				  PartitionDispatch *pd,
				  TupleTableSlot *slot,
				  EState *estate);
#define EvalPlanQualSetSlot(epqstate, slot)  ((epqstate)->origslot = (slot))
extern void EvalPlanQualFetchRowMarks(EPQState *epqstate);
extern TupleTableSlot *EvalPlanQualNext(EPQState *epqstate);
extern void EvalPlanQualBegin(EPQState *epqstate, EState *parentestate);
extern void EvalPlanQualEnd(EPQState *epqstate);

/*
 * prototypes from functions in execProcnode.c
 */
extern PlanState *ExecInitNode(Plan *node, EState *estate, int eflags);
extern TupleTableSlot *ExecProcNode(PlanState *node);
extern Node *MultiExecProcNode(PlanState *node);
extern void ExecEndNode(PlanState *node);
Add a Gather executor node.
A Gather executor node runs any number of copies of a plan in an equal
number of workers and merges all of the results into a single tuple
stream. It can also run the plan itself, if the workers are
unavailable or haven't started up yet. It is intended to work with
the Partial Seq Scan node which will be added in future commits.
It could also be used to implement parallel query of a different sort
by itself, without help from Partial Seq Scan, if the single_copy mode
is used. In that mode, a worker executes the plan, and the parallel
leader does not, merely collecting the worker's results. So, a Gather
node could be inserted into a plan to split the execution of that plan
across two processes. Nested Gather nodes aren't currently supported,
but we might want to add support for that in the future.
There's nothing in the planner to actually generate Gather nodes yet,
so it's not quite time to break out the champagne. But we're getting
close.
Amit Kapila. Some design suggestions were provided by me, and I also
reviewed the patch. Single-copy mode, documentation, and other minor
changes also by me.
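The merge of several workers' outputs into one stream can be sketched as follows. This is not PostgreSQL code; `ToyQueue` and `toy_gather` are invented stand-ins. Gather makes no ordering guarantee, so this sketch simply drains the queues round-robin, skipping exhausted ones, the way the leader keeps reading until every worker is done.

```c
#include <assert.h>
#include <stddef.h>

/* Toy sketch (not PostgreSQL code): a worker's output queue. */
typedef struct ToyQueue
{
	const int  *items;
	int			count;
	int			pos;
} ToyQueue;

/* Drain all "worker" queues round-robin into out[]; return total count.
 * A queue that runs dry is simply skipped on later passes. */
static int
toy_gather(ToyQueue *queues, int nworkers, int *out)
{
	int			n = 0;
	int			live = nworkers;

	while (live > 0)
	{
		live = 0;
		for (int i = 0; i < nworkers; i++)
		{
			if (queues[i].pos < queues[i].count)
			{
				out[n++] = queues[i].items[queues[i].pos++];
				live++;
			}
		}
	}
	return n;
}
```

In single_copy mode the loop above would degenerate to draining exactly one worker's queue, with the leader doing no plan execution of its own.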
extern bool ExecShutdownNode(PlanState *node);

/*
 * prototypes from functions in execQual.c
 */
extern Datum GetAttributeByNum(HeapTupleHeader tuple, AttrNumber attrno,
				  bool *isNull);
extern Datum GetAttributeByName(HeapTupleHeader tuple, const char *attname,
				   bool *isNull);
extern Tuplestorestate *ExecMakeTableFunctionResult(ExprState *funcexpr,
							ExprContext *econtext,
							MemoryContext argContext,
							TupleDesc expectedDesc,
							bool randomAccess);
Move targetlist SRF handling from expression evaluation to new executor node.
Evaluation of set returning functions (SRFs) in the targetlist (like SELECT
generate_series(1,5)) so far was done in the expression evaluation (i.e.
ExecEvalExpr()) and projection (i.e. ExecProject/ExecTargetList) code.
This meant that most executor nodes performing projection, and most
expression evaluation functions, had to deal with the possibility that an
evaluated expression could return a set of return values.
That's bad because it leads to repeated code in a lot of places. It also,
and that's my (Andres's) motivation, made it a lot harder to implement a
more efficient way of doing expression evaluation.
To fix this, introduce a new executor node (ProjectSet) that can evaluate
targetlists containing one or more SRFs. To avoid the complexity of the old
way of handling nested expressions returning sets (e.g. having to pass up
ExprDoneCond, and dealing with arguments to functions returning sets etc.),
those SRFs can only be at the top level of the node's targetlist. The
planner makes sure (via split_pathtarget_at_srfs()) that SRF evaluation is
only necessary in ProjectSet nodes and that SRFs are only present at the
top level of the node's targetlist. If there are nested SRFs the planner
creates multiple stacked ProjectSet nodes. The ProjectSet nodes always get
input from an underlying node.
We also discussed and prototyped evaluating targetlist SRFs using ROWS
FROM(), but that turned out to be more complicated than we'd hoped.
While moving SRF evaluation to ProjectSet would have allowed retaining the old
"least common multiple" behavior when multiple SRFs are present in one
targetlist (i.e. continue returning rows until all SRFs are at the end of
their input at the same time), we decided to instead only return rows till
all SRFs are exhausted, returning NULL for already exhausted ones. We
deemed the previous behavior to be too confusing, unexpected and actually
not particularly useful.
As a side effect, the previously prohibited case of multiple set returning
arguments to a function is now allowed. Not because it's particularly
desirable, but because it ends up working and there seems to be no argument
for adding code to prohibit it.
Currently the behavior for COALESCE and CASE containing SRFs has changed,
returning multiple rows from the expression, even when the SRF containing
"arm" of the expression is not evaluated. That's because the SRFs are
evaluated in a separate ProjectSet node. As that's quite confusing, we're
likely to instead prohibit SRFs in those places. But that's still being
discussed, and the code would reside in places not touched here, so that's
a task for later.
There's a lot of now-superfluous code dealing with set return expressions
around. But as the changes to get rid of it are verbose and largely boring,
it seems better for readability to keep the cleanup as a separate commit.
Author: Tom Lane and Andres Freund
Discussion: https://postgr.es/m/20160822214023.aaxz5l4igypowyri@alap3.anarazel.de
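The difference between the old "least common multiple" rule and the new "run until all SRFs are exhausted" rule can be made concrete with a small sketch. This is not PostgreSQL code; `toy_srf_rows_old` and `toy_srf_rows_new` are invented helpers computing only the number of rows a two-SRF targetlist produces under each rule.

```c
#include <assert.h>

/* Toy sketch (not PostgreSQL code): row counts for a targetlist with two
 * SRFs of the given result lengths. */
static int
toy_gcd(int a, int b)
{
	return b ? toy_gcd(b, a % b) : a;
}

/* Old rule: cycle each SRF until all wrap at the same time => LCM. */
static int
toy_srf_rows_old(int len1, int len2)
{
	return len1 / toy_gcd(len1, len2) * len2;
}

/* New rule: run until every SRF is exhausted, padding finished ones
 * with NULL => max of the lengths. */
static int
toy_srf_rows_new(int len1, int len2)
{
	return len1 > len2 ? len1 : len2;
}
```

So SELECT generate_series(1,2), generate_series(1,3) used to produce six rows under the LCM rule and now produces three, with the first column NULL in the third row.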
extern Datum ExecMakeFunctionResultSet(FuncExprState *fcache,
						  ExprContext *econtext,
						  bool *isNull,
						  ExprDoneCond *isDone);
extern Datum ExecEvalExprSwitchContext(ExprState *expression, ExprContext *econtext,
						  bool *isNull);
extern ExprState *ExecInitExpr(Expr *node, PlanState *parent);
extern ExprState *ExecPrepareExpr(Expr *node, EState *estate);
extern bool ExecQual(List *qual, ExprContext *econtext, bool resultForNull);
extern int	ExecTargetListLength(List *targetlist);
extern int	ExecCleanTargetListLength(List *targetlist);
extern TupleTableSlot *ExecProject(ProjectionInfo *projInfo);

/*
 * prototypes from functions in execScan.c
 */
typedef TupleTableSlot *(*ExecScanAccessMtd) (ScanState *node);
typedef bool (*ExecScanRecheckMtd) (ScanState *node, TupleTableSlot *slot);
extern TupleTableSlot *ExecScan(ScanState *node, ExecScanAccessMtd accessMtd,
		 ExecScanRecheckMtd recheckMtd);
extern void ExecAssignScanProjectionInfo(ScanState *node);
Code review for foreign/custom join pushdown patch.
Commit e7cb7ee14555cc9c5773e2c102efd6371f6f2005 included some design
decisions that seem pretty questionable to me, and there was quite a lot
of stuff not to like about the documentation and comments. Clean up
as follows:
* Consider foreign joins only between foreign tables on the same server,
rather than between any two foreign tables with the same underlying FDW
handler function. In most if not all cases, the FDW would simply have had
to apply the same-server restriction itself (far more expensively, both for
lack of caching and because it would be repeated for each combination of
input sub-joins), or else risk nasty bugs. Anyone who's really intent on
doing something outside this restriction can always use the
set_join_pathlist_hook.
* Rename fdw_ps_tlist/custom_ps_tlist to fdw_scan_tlist/custom_scan_tlist
to better reflect what they're for, and allow these custom scan tlists
to be used even for base relations.
* Change make_foreignscan() API to include passing the fdw_scan_tlist
value, since the FDW is required to set that. Backwards compatibility
doesn't seem like an adequate reason to expect FDWs to set it in some
ad-hoc extra step, and anyway existing FDWs can just pass NIL.
* Change the API of path-generating subroutines of add_paths_to_joinrel,
and in particular that of GetForeignJoinPaths and set_join_pathlist_hook,
so that various less-used parameters are passed in a struct rather than
as separate parameter-list entries. The objective here is to reduce the
probability that future additions to those parameter lists will result in
source-level API breaks for users of these hooks. It's possible that this
is even a small win for the core code, since most CPU architectures can't
pass more than half a dozen parameters efficiently anyway. I kept root,
joinrel, outerrel, innerrel, and jointype as separate parameters to reduce
code churn in joinpath.c --- in particular, putting jointype into the
struct would have been problematic because of the subroutines' habit of
changing their local copies of that variable.
* Avoid ad-hocery in ExecAssignScanProjectionInfo. It was probably all
right for it to know about IndexOnlyScan, but if the list is to grow
we should refactor the knowledge out to the callers.
* Restore nodeForeignscan.c's previous use of the relcache to avoid
extra GetFdwRoutine lookups for base-relation scans.
* Lots of cleanup of documentation and missed comments. Re-order some
code additions into more logical places.
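The "pass the less-used parameters in a struct" decision above is a general API-stability pattern worth a sketch. This is not PostgreSQL code; the `_toy` names are invented. The point is that a field added to the extra-data struct later does not break existing hook implementations at the source level, whereas a new positional parameter would.

```c
#include <assert.h>
#include <stddef.h>

/* Toy sketch (not PostgreSQL code): frequently-used join parameters stay
 * positional; less-used ones travel in a struct that can grow later. */
typedef struct JoinPathExtraData_toy
{
	int			sjinfo;			/* stand-ins for the real fields */
	int			semifactors;
	void	   *param_source_rels;
	/* new fields can be appended here without breaking existing hooks */
} JoinPathExtraData_toy;

typedef void (*join_pathlist_hook_toy) (int joinrel, int outerrel,
										int innerrel, int jointype,
										JoinPathExtraData_toy *extra);

static int	hook_calls = 0;

/* An extension's hook: it ignores struct fields it doesn't know about. */
static void
my_hook(int joinrel, int outerrel, int innerrel, int jointype,
		JoinPathExtraData_toy *extra)
{
	(void) joinrel;
	(void) outerrel;
	(void) innerrel;
	(void) jointype;
	(void) extra;
	hook_calls++;
}

/* The core code fills the struct once and invokes the hook. */
static void
add_paths_to_joinrel_toy(join_pathlist_hook_toy hook)
{
	JoinPathExtraData_toy extra = {0, 0, NULL};

	if (hook)
		hook(1, 2, 3, 0, &extra);
}
```

Keeping root, joinrel, outerrel, innerrel, and jointype positional, as the commit does, limits churn at the call sites while still giving the struct room to grow.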
extern void ExecAssignScanProjectionInfoWithVarno(ScanState *node, Index varno);
extern void ExecScanReScan(ScanState *node);

/*
 * prototypes from functions in execTuples.c
 */
extern void ExecInitResultTupleSlot(EState *estate, PlanState *planstate);
extern void ExecInitScanTupleSlot(EState *estate, ScanState *scanstate);
extern TupleTableSlot *ExecInitExtraTupleSlot(EState *estate);
extern TupleTableSlot *ExecInitNullTupleSlot(EState *estate,
					TupleDesc tupType);
extern TupleDesc ExecTypeFromTL(List *targetList, bool hasoid);
extern TupleDesc ExecCleanTypeFromTL(List *targetList, bool hasoid);
Ensure that RowExprs and whole-row Vars produce the expected column names.
At one time it wasn't terribly important what column names were associated
with the fields of a composite Datum, but since the introduction of
operations like row_to_json(), it's important that looking up the rowtype
ID embedded in the Datum returns the column names that users would expect.
That did not work terribly well before this patch: you could get the column
names of the underlying table, or column aliases from any level of the
query, depending on minor details of the plan tree. You could even get
totally empty field names, which is disastrous for cases like row_to_json().
To fix this for whole-row Vars, look to the RTE referenced by the Var, and
make sure its column aliases are applied to the rowtype associated with
the result Datums. This is a tad scary because we might have to return
a transient RECORD type even though the Var is declared as having some
named rowtype. In principle it should be all right because the record
type will still be physically compatible with the named rowtype; but
I had to weaken one Assert in ExecEvalConvertRowtype, and there might be
third-party code containing similar assumptions.
Similarly, RowExprs have to be willing to override the column names coming
from a named composite result type and produce a RECORD when the column
aliases visible at the site of the RowExpr differ from the underlying
table's column names.
In passing, revert the decision made in commit 398f70ec070fe601 to add
an alias-list argument to ExecTypeFromExprList: better to provide that
functionality in a separate function. This also reverts most of the code
changes in d68581483564ec0f, which we don't need because we're no longer
depending on the tupdesc found in the child plan node's result slot to be
blessed.
Back-patch to 9.4, but not earlier, since this solution changes the results
in some cases that users might not have realized were buggy. We'll apply a
more restricted form of this patch in older branches.
extern TupleDesc ExecTypeFromExprList(List *exprList);
extern void ExecTypeSetColNames(TupleDesc typeInfo, List *namesList);
extern void UpdateChangedParamSet(PlanState *node, Bitmapset *newchg);

typedef struct TupOutputState
{
	TupleTableSlot *slot;
	DestReceiver *dest;
} TupOutputState;

extern TupOutputState *begin_tup_output_tupdesc(DestReceiver *dest,
						 TupleDesc tupdesc);
extern void do_tup_output(TupOutputState *tstate, Datum *values, bool *isnull);
extern void do_text_output_multiline(TupOutputState *tstate, const char *txt);
extern void end_tup_output(TupOutputState *tstate);

/*
 * Write a single line of text given as a C string.
 *
 * Should only be used with a single-TEXT-attribute tupdesc.
 */
#define do_text_output_oneline(tstate, str_to_emit) \
	do { \
		Datum	values_[1]; \
		bool	isnull_[1]; \
		values_[0] = PointerGetDatum(cstring_to_text(str_to_emit)); \
		isnull_[0] = false; \
		do_tup_output(tstate, values_, isnull_); \
		pfree(DatumGetPointer(values_[0])); \
	} while (0)

/*
 * prototypes from functions in execUtils.c
 */
extern EState *CreateExecutorState(void);
extern void FreeExecutorState(EState *estate);
extern ExprContext *CreateExprContext(EState *estate);
extern ExprContext *CreateStandaloneExprContext(void);
extern void FreeExprContext(ExprContext *econtext, bool isCommit);
extern void ReScanExprContext(ExprContext *econtext);

#define ResetExprContext(econtext) \
	MemoryContextReset((econtext)->ecxt_per_tuple_memory)

extern ExprContext *MakePerTupleExprContext(EState *estate);

/* Get an EState's per-output-tuple exprcontext, making it if first use */
#define GetPerTupleExprContext(estate) \
	((estate)->es_per_tuple_exprcontext ? \
	 (estate)->es_per_tuple_exprcontext : \
	 MakePerTupleExprContext(estate))

#define GetPerTupleMemoryContext(estate) \
	(GetPerTupleExprContext(estate)->ecxt_per_tuple_memory)

/* Reset an EState's per-output-tuple exprcontext, if one's been created */
#define ResetPerTupleExprContext(estate) \
	do { \
		if ((estate)->es_per_tuple_exprcontext) \
			ResetExprContext((estate)->es_per_tuple_exprcontext); \
	} while (0)
2003-08-08 23:42:59 +02:00
|
|
|
extern void ExecAssignExprContext(EState *estate, PlanState *planstate);
|
2006-06-16 20:42:24 +02:00
|
|
|
extern void ExecAssignResultType(PlanState *planstate, TupleDesc tupDesc);
|
2003-08-08 23:42:59 +02:00
|
|
|
extern void ExecAssignResultTypeFromTL(PlanState *planstate);
|
|
|
|
extern TupleDesc ExecGetResultType(PlanState *planstate);
|
2003-01-12 05:03:34 +01:00
|
|
|
extern ProjectionInfo *ExecBuildProjectionInfo(List *targetList,
|
2003-08-04 02:43:34 +02:00
|
|
|
ExprContext *econtext,
|
2007-02-02 01:07:03 +01:00
|
|
|
TupleTableSlot *slot,
|
|
|
|
TupleDesc inputDesc);
|
|
|
|
extern void ExecAssignProjectionInfo(PlanState *planstate,
|
2007-11-15 22:14:46 +01:00
|
|
|
TupleDesc inputDesc);
|
2003-08-08 23:42:59 +02:00
|
|
|
extern void ExecFreeExprContext(PlanState *planstate);
|
2006-06-16 20:42:24 +02:00
|
|
|
extern void ExecAssignScanType(ScanState *scanstate, TupleDesc tupDesc);
extern void ExecAssignScanTypeFromOuterPlan(ScanState *scanstate);
extern bool ExecRelationIsTargetRelation(EState *estate, Index scanrelid);
extern Relation ExecOpenScanRelation(EState *estate, Index scanrelid, int eflags);
extern void ExecCloseScanRelation(Relation scanrel);
extern void RegisterExprContextCallback(ExprContext *econtext,
							ExprContextCallbackFunction function,
							Datum arg);
extern void UnregisterExprContextCallback(ExprContext *econtext,
							  ExprContextCallbackFunction function,
							  Datum arg);
/*
 * prototypes from functions in execIndexing.c
 */
extern void ExecOpenIndices(ResultRelInfo *resultRelInfo, bool speculative);
extern void ExecCloseIndices(ResultRelInfo *resultRelInfo);
extern List *ExecInsertIndexTuples(TupleTableSlot *slot, ItemPointer tupleid,
					  EState *estate, bool noDupErr, bool *specConflict,
					  List *arbiterIndexes);
extern bool ExecCheckIndexConstraints(TupleTableSlot *slot, EState *estate,
						  ItemPointer conflictTid, List *arbiterIndexes);
extern void check_exclusion_constraint(Relation heap, Relation index,
						   IndexInfo *indexInfo,
						   ItemPointer tupleid,
						   Datum *values, bool *isnull,
						   EState *estate, bool newIndex);
#endif /* EXECUTOR_H */