/*-------------------------------------------------------------------------
 *
 * nodeIndexscan.c
 *    Routines to support indexed scans of relations
 *
 * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 *
 * IDENTIFICATION
 *    src/backend/executor/nodeIndexscan.c
 *
 *-------------------------------------------------------------------------
 */

/*
 * INTERFACE ROUTINES
 *    ExecIndexScan                 scans a relation using an index
 *    IndexNext                     retrieve next tuple using index
 *    IndexNextWithReorder          same, but recheck ORDER BY expressions
 *    ExecInitIndexScan             creates and initializes state info.
 *    ExecReScanIndexScan           rescans the indexed relation.
 *    ExecEndIndexScan              releases all storage.
 *    ExecIndexMarkPos              marks scan position.
 *    ExecIndexRestrPos             restores scan position.
 *    ExecIndexScanEstimate         estimates DSM space needed for parallel index scan
 *    ExecIndexScanInitializeDSM    initialize DSM for parallel indexscan
 *    ExecIndexScanReInitializeDSM  reinitialize DSM for fresh scan
 *    ExecIndexScanInitializeWorker attach to DSM info in parallel worker
 */
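
/*
 * Note: ExecIndexScan, defined later in this file, is the entry point the
 * executor calls; it fetches tuples through IndexNextWithReorder when the
 * scan has ORDER BY operator (amcanorderbyop) keys and through IndexNext
 * otherwise.
 */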

#include "postgres.h"

#include "access/nbtree.h"
#include "access/relscan.h"
#include "access/tableam.h"
#include "catalog/pg_am.h"
#include "executor/executor.h"
#include "executor/nodeIndexscan.h"
#include "lib/pairingheap.h"
#include "miscadmin.h"
#include "nodes/nodeFuncs.h"
#include "utils/array.h"
#include "utils/datum.h"
#include "utils/lsyscache.h"
#include "utils/rel.h"

/*
 * When an ordering operator is used, tuples fetched from the index that
 * need to be reordered are queued in a pairing heap, as ReorderTuples.
 */
typedef struct
{
    pairingheap_node ph_node;
    HeapTuple   htup;
    Datum      *orderbyvals;
    bool       *orderbynulls;
} ReorderTuple;
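
/*
 * Note (descriptive addition, not in the original struct comment): htup is
 * a palloc'd copy of the whole heap tuple, not just a TID.  A queued tuple
 * must stay valid after the index scan has moved past it, until
 * reorderqueue_pop() hands it back and it is stored into the scan slot
 * with shouldFree = true.
 */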

static TupleTableSlot *IndexNext(IndexScanState *node);
static TupleTableSlot *IndexNextWithReorder(IndexScanState *node);
static void EvalOrderByExpressions(IndexScanState *node, ExprContext *econtext);
static bool IndexRecheck(IndexScanState *node, TupleTableSlot *slot);
static int  cmp_orderbyvals(const Datum *adist, const bool *anulls,
                            const Datum *bdist, const bool *bnulls,
                            IndexScanState *node);
static int  reorderqueue_cmp(const pairingheap_node *a,
                             const pairingheap_node *b, void *arg);
static void reorderqueue_push(IndexScanState *node, TupleTableSlot *slot,
                              Datum *orderbyvals, bool *orderbynulls);
static HeapTuple reorderqueue_pop(IndexScanState *node);

/* ----------------------------------------------------------------
 *      IndexNext
 *
 *      Retrieve a tuple from the IndexScan node's currentRelation
 *      using the index specified in the IndexScanState information.
 * ----------------------------------------------------------------
 */
static TupleTableSlot *
IndexNext(IndexScanState *node)
{
    EState     *estate;
    ExprContext *econtext;
    ScanDirection direction;
    IndexScanDesc scandesc;
    TupleTableSlot *slot;

    /*
     * extract necessary information from index scan node
     */
    estate = node->ss.ps.state;

    /*
     * Determine which direction to scan the index in based on the plan's
     * scan direction and the current direction of execution.
     */
    direction = ScanDirectionCombine(estate->es_direction,
                                     ((IndexScan *) node->ss.ps.plan)->indexorderdir);
    scandesc = node->iss_ScanDesc;
    econtext = node->ss.ps.ps_ExprContext;
    slot = node->ss.ss_ScanTupleSlot;

    if (scandesc == NULL)
    {
        /*
         * We reach here if the index scan is not parallel, or if we're
         * serially executing an index scan that was planned to be parallel.
         */
        scandesc = index_beginscan(node->ss.ss_currentRelation,
                                   node->iss_RelationDesc,
                                   estate->es_snapshot,
                                   node->iss_NumScanKeys,
                                   node->iss_NumOrderByKeys);

        node->iss_ScanDesc = scandesc;

        /*
         * If no run-time keys to calculate or they are ready, go ahead and
         * pass the scankeys to the index AM.
         */
        if (node->iss_NumRuntimeKeys == 0 || node->iss_RuntimeKeysReady)
            index_rescan(scandesc,
                         node->iss_ScanKeys, node->iss_NumScanKeys,
                         node->iss_OrderByKeys, node->iss_NumOrderByKeys);
    }

    /*
     * ok, now that we have what we need, fetch the next tuple.
     */
    while (index_getnext_slot(scandesc, direction, slot))
    {
        CHECK_FOR_INTERRUPTS();

        /*
         * If the index was lossy, we have to recheck the index quals using
         * the fetched tuple.
         */
        if (scandesc->xs_recheck)
        {
            econtext->ecxt_scantuple = slot;
            if (!ExecQualAndReset(node->indexqualorig, econtext))
            {
                /* Fails recheck, so drop it and loop back for another */
                InstrCountFiltered2(node, 1);
                continue;
            }
        }

        return slot;
    }

    /*
     * if we get here it means the index scan failed, so we are at the end
     * of the scan.
     */
    node->iss_ReachedEnd = true;
    return ExecClearTuple(slot);
}

/* ----------------------------------------------------------------
 *      IndexNextWithReorder
 *
 *      Like IndexNext, but this version can also re-check ORDER BY
 *      expressions, and reorder the tuples as necessary.
 * ----------------------------------------------------------------
 */
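/*
 * Illustrative example (not part of the original file): a K-nearest-
 * neighbour query such as
 *
 *      SELECT * FROM tbl ORDER BY pt <-> point '(0,0)' LIMIT 10
 *
 * can be executed with an index (e.g. GiST) whose distance function is
 * lossy.  In that case xs_recheckorderby is set, the values in
 * xs_orderbyvals are only lower bounds, and each tuple's true ORDER BY
 * values must be recomputed, with out-of-order tuples parked in the
 * reorder queue until it is safe to return them.
 */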
static TupleTableSlot *
IndexNextWithReorder(IndexScanState *node)
{
    EState     *estate;
    ExprContext *econtext;
    IndexScanDesc scandesc;
    TupleTableSlot *slot;
    ReorderTuple *topmost = NULL;
    bool        was_exact;
    Datum      *lastfetched_vals;
    bool       *lastfetched_nulls;
    int         cmp;

    estate = node->ss.ps.state;

    /*
     * Only forward scan is supported with reordering.  Note: we can get away
     * with just Asserting here because the system will not try to run the
     * plan backwards if ExecSupportsBackwardScan() says it won't work.
     * Currently, that is guaranteed because no index AMs support both
     * amcanorderbyop and amcanbackward; if any ever do,
     * ExecSupportsBackwardScan() will need to consider indexorderbys
     * explicitly.
     */
    Assert(!ScanDirectionIsBackward(((IndexScan *) node->ss.ps.plan)->indexorderdir));
    Assert(ScanDirectionIsForward(estate->es_direction));

    scandesc = node->iss_ScanDesc;
    econtext = node->ss.ps.ps_ExprContext;
    slot = node->ss.ss_ScanTupleSlot;

    if (scandesc == NULL)
    {
        /*
         * We reach here if the index scan is not parallel, or if we're
         * serially executing an index scan that was planned to be parallel.
         */
        scandesc = index_beginscan(node->ss.ss_currentRelation,
                                   node->iss_RelationDesc,
                                   estate->es_snapshot,
                                   node->iss_NumScanKeys,
                                   node->iss_NumOrderByKeys);

        node->iss_ScanDesc = scandesc;

        /*
         * If no run-time keys to calculate or they are ready, go ahead and
         * pass the scankeys to the index AM.
         */
        if (node->iss_NumRuntimeKeys == 0 || node->iss_RuntimeKeysReady)
            index_rescan(scandesc,
                         node->iss_ScanKeys, node->iss_NumScanKeys,
                         node->iss_OrderByKeys, node->iss_NumOrderByKeys);
    }

    for (;;)
    {
        CHECK_FOR_INTERRUPTS();

        /*
         * Check the reorder queue first.  If the topmost tuple in the queue
         * has an ORDER BY value smaller than (or equal to) the value last
         * returned by the index, we can return it now.
         */
        if (!pairingheap_is_empty(node->iss_ReorderQueue))
        {
            topmost = (ReorderTuple *) pairingheap_first(node->iss_ReorderQueue);

            if (node->iss_ReachedEnd ||
                cmp_orderbyvals(topmost->orderbyvals,
                                topmost->orderbynulls,
                                scandesc->xs_orderbyvals,
                                scandesc->xs_orderbynulls,
                                node) <= 0)
            {
                HeapTuple   tuple;

                tuple = reorderqueue_pop(node);

                /* Pass 'true', as the tuple in the queue is a palloc'd copy */
                ExecForceStoreHeapTuple(tuple, slot, true);
                return slot;
            }
        }
        else if (node->iss_ReachedEnd)
        {
            /* Queue is empty, and no more tuples from index.  We're done. */
            return ExecClearTuple(slot);
        }

        /*
         * Fetch next tuple from the index.
         */
next_indextuple:
        if (!index_getnext_slot(scandesc, ForwardScanDirection, slot))
        {
            /*
             * No more tuples from the index.  But we still need to drain any
             * remaining tuples from the queue before we're done.
             */
            node->iss_ReachedEnd = true;
            continue;
        }

        /*
         * If the index was lossy, we have to recheck the index quals and
         * ORDER BY expressions using the fetched tuple.
         */
        if (scandesc->xs_recheck)
        {
            econtext->ecxt_scantuple = slot;
            if (!ExecQualAndReset(node->indexqualorig, econtext))
            {
                /* Fails recheck, so drop it and loop back for another */
                InstrCountFiltered2(node, 1);
                /* allow this loop to be cancellable */
                CHECK_FOR_INTERRUPTS();
                goto next_indextuple;
            }
        }

        if (scandesc->xs_recheckorderby)
        {
            econtext->ecxt_scantuple = slot;
            ResetExprContext(econtext);
            EvalOrderByExpressions(node, econtext);

            /*
             * Was the ORDER BY value returned by the index accurate?  The
             * recheck flag means that the index can return inaccurate
             * values, but then again, the value returned for any particular
             * tuple could also be exactly correct.  Compare the value
             * returned by the index with the recalculated value.  (If the
             * value returned by the index happened to be exactly right, we
             * can often avoid pushing the tuple to the queue, just to pop
             * it back out again.)
             */
            cmp = cmp_orderbyvals(node->iss_OrderByValues,
                                  node->iss_OrderByNulls,
                                  scandesc->xs_orderbyvals,
                                  scandesc->xs_orderbynulls,
                                  node);
            if (cmp < 0)
                elog(ERROR, "index returned tuples in wrong order");
            else if (cmp == 0)
                was_exact = true;
            else
                was_exact = false;
            lastfetched_vals = node->iss_OrderByValues;
            lastfetched_nulls = node->iss_OrderByNulls;
        }
        else
        {
            was_exact = true;
            lastfetched_vals = scandesc->xs_orderbyvals;
            lastfetched_nulls = scandesc->xs_orderbynulls;
        }

        /*
         * Can we return this tuple immediately, or does it need to be
         * pushed to the reorder queue?  If the ORDER BY expression values
         * returned by the index were inaccurate, we can't return it yet,
         * because the next tuple from the index might need to come before
         * this one.  Also, we can't return it yet if there are any smaller
         * tuples in the queue already.
         */
        if (!was_exact || (topmost && cmp_orderbyvals(lastfetched_vals,
                                                      lastfetched_nulls,
                                                      topmost->orderbyvals,
                                                      topmost->orderbynulls,
                                                      node) > 0))
        {
            /* Put this tuple to the queue */
            reorderqueue_push(node, slot, lastfetched_vals, lastfetched_nulls);
            continue;
        }
        else
        {
            /* Can return this tuple immediately. */
            return slot;
        }
    }

    /*
     * if we get here it means the index scan failed, so we are at the end
     * of the scan.
     */
    return ExecClearTuple(slot);
}

/*
 * Calculate the expressions in the ORDER BY clause, based on the heap tuple.
 */
static void
EvalOrderByExpressions(IndexScanState *node, ExprContext *econtext)
{
    int         i;
    ListCell   *l;
    MemoryContext oldContext;

    oldContext = MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);

    i = 0;
    foreach(l, node->indexorderbyorig)
    {
        ExprState  *orderby = (ExprState *) lfirst(l);

        node->iss_OrderByValues[i] = ExecEvalExpr(orderby,
                                                  econtext,
                                                  &node->iss_OrderByNulls[i]);
        i++;
    }

    MemoryContextSwitchTo(oldContext);
}

/*
 * IndexRecheck -- access method routine to recheck a tuple in EvalPlanQual
 */
static bool
IndexRecheck(IndexScanState *node, TupleTableSlot *slot)
{
    ExprContext *econtext;

    /*
     * extract necessary information from index scan node
     */
    econtext = node->ss.ps.ps_ExprContext;

    /* Does the tuple meet the indexqual condition? */
    econtext->ecxt_scantuple = slot;
    return ExecQualAndReset(node->indexqualorig, econtext);
}

/*
 * Compare ORDER BY expression values.
 */
static int
cmp_orderbyvals(const Datum *adist, const bool *anulls,
                const Datum *bdist, const bool *bnulls,
                IndexScanState *node)
{
    int         i;
    int         result;

    for (i = 0; i < node->iss_NumOrderByKeys; i++)
    {
        SortSupport ssup = &node->iss_SortSupport[i];

        /*
         * Handle nulls.  We only need to support NULLS LAST ordering,
         * because match_pathkeys_to_index() doesn't consider indexorderby
         * implementation otherwise.
         */
        if (anulls[i] && !bnulls[i])
            return 1;
        else if (!anulls[i] && bnulls[i])
            return -1;
        else if (anulls[i] && bnulls[i])
            return 0;

        result = ssup->comparator(adist[i], bdist[i], ssup);
        if (result != 0)
            return result;
    }

    return 0;
}

/*
 * The pairing heap gives us its topmost (greatest) element, while a KNN
 * scan needs ascending order.  That's why we invert the sort order here.
 */
static int
reorderqueue_cmp(const pairingheap_node *a, const pairingheap_node *b,
                 void *arg)
{
    ReorderTuple *rta = (ReorderTuple *) a;
    ReorderTuple *rtb = (ReorderTuple *) b;
    IndexScanState *node = (IndexScanState *) arg;

    /* exchange argument order to invert the sort order */
    return cmp_orderbyvals(rtb->orderbyvals, rtb->orderbynulls,
                           rta->orderbyvals, rta->orderbynulls,
                           node);
}
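
/*
 * Example (illustrative): if the queue holds tuples with distances 1.0 and
 * 2.0, the swapped arguments above make the distance-1.0 tuple compare as
 * "greater", so pairingheap_first() hands back the nearest tuple first,
 * which is exactly the ascending order a KNN scan must produce.
 */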
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Helper function to push a tuple to the reorder queue.
|
|
|
|
*/
|
|
|
|
static void
|
tableam: Add and use scan APIs.
Too allow table accesses to be not directly dependent on heap, several
new abstractions are needed. Specifically:
1) Heap scans need to be generalized into table scans. Do this by
introducing TableScanDesc, which will be the "base class" for
individual AMs. This contains the AM independent fields from
HeapScanDesc.
The previous heap_{beginscan,rescan,endscan} et al. have been
replaced with a table_ version.
There's no direct replacement for heap_getnext(), as that returned
a HeapTuple, which is undesirable for a other AMs. Instead there's
table_scan_getnextslot(). But note that heap_getnext() lives on,
it's still used widely to access catalog tables.
This is achieved by new scan_begin, scan_end, scan_rescan,
scan_getnextslot callbacks.
2) The portion of parallel scans that's shared between backends need
to be able to do so without the user doing per-AM work. To achieve
that new parallelscan_{estimate, initialize, reinitialize}
callbacks are introduced, which operate on a new
ParallelTableScanDesc, which again can be subclassed by AMs.
As it is likely that several AMs are going to be block oriented,
block oriented callbacks that can be shared between such AMs are
provided and used by heap. table_block_parallelscan_{estimate,
intiialize, reinitialize} as callbacks, and
table_block_parallelscan_{nextpage, init} for use in AMs. These
operate on a ParallelBlockTableScanDesc.
3) Index scans need to be able to access tables to return a tuple, and
there needs to be state across individual accesses to the heap to
store state like buffers. That's now handled by introducing a
sort-of-scan IndexFetchTable, which again is intended to be
subclassed by individual AMs (for heap IndexFetchHeap).
The relevant callbacks for an AM are index_fetch_{end, begin,
reset} to create the necessary state, and index_fetch_tuple to
retrieve an indexed tuple. Note that index_fetch_tuple
implementations need to be smarter than just blindly fetching the
tuples for AMs that have optimizations similar to heap's HOT - the
currently alive tuple in the update chain needs to be fetched if
appropriate.
Similar to table_scan_getnextslot(), it's undesirable to continue
to return HeapTuples. Thus index_fetch_heap (might want to rename
that later) now accepts a slot as an argument. Core code doesn't
have a lot of call sites performing index scans without going
through the systable_* API (in contrast to loads of heap_getnext
calls and working directly with HeapTuples).
Index scans now store the result of a search in
IndexScanDesc->xs_heaptid, rather than xs_ctup->t_self. As the
target is not generally a HeapTuple anymore that seems cleaner.
To be able to sensibly adapt code to use the above, two further
callbacks have been introduced:
a) slot_callbacks returns a TupleTableSlotOps* suitable for creating
slots capable of holding a tuple of the AM's
type. table_slot_callbacks() and table_slot_create() are based
upon that, but have additional logic to deal with views, foreign
tables, etc.
While this change could have been done separately, nearly all the
call sites that needed to be adapted for the rest of this commit
also would have been needed to be adapted for
table_slot_callbacks(), making separation not worthwhile.
b) tuple_satisfies_snapshot checks whether the tuple in a slot is
currently visible according to a snapshot. That's required as a few
places now don't have a buffer + HeapTuple around, but a
slot (which in heap's case internally has that information).
Additionally a few infrastructure changes were needed:
I) SysScanDesc, as used by systable_{beginscan, getnext} et al. now
internally uses a slot to keep track of tuples. While
systable_getnext() still returns HeapTuples, and will do so for the
foreseeable future, the index API (see 1) above) now only deals with
slots.
The remainder, and largest part, of this commit is then adjusting all
scans in postgres to use the new APIs.
Author: Andres Freund, Haribabu Kommi, Alvaro Herrera
Discussion:
https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
2019-03-11 20:46:41 +01:00
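As a hedged sketch (not in this file), the table_* pattern the commit describes looks roughly like this, assuming an already-opened relation and a valid snapshot:

/*
 * Sketch only: a slot-based table scan replacing the old
 * heap_beginscan()/heap_getnext() idiom.  "rel" and "snapshot" are
 * assumed to be set up by the caller.
 */
static void
table_scan_sketch(Relation rel, Snapshot snapshot)
{
	TableScanDesc scan = table_beginscan(rel, snapshot, 0, NULL);
	TupleTableSlot *slot = table_slot_create(rel, NULL);

	while (table_scan_getnextslot(scan, ForwardScanDirection, slot))
	{
		/* work with the tuple through the slot, not as a HeapTuple */
	}

	ExecDropSingleTupleTableSlot(slot);
	table_endscan(scan);
}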
|
|
|
reorderqueue_push(IndexScanState *node, TupleTableSlot *slot,
|
2015-05-15 13:26:51 +02:00
|
|
|
Datum *orderbyvals, bool *orderbynulls)
|
|
|
|
{
|
|
|
|
IndexScanDesc scandesc = node->iss_ScanDesc;
|
|
|
|
EState *estate = node->ss.ps.state;
|
|
|
|
MemoryContext oldContext = MemoryContextSwitchTo(estate->es_query_cxt);
|
|
|
|
ReorderTuple *rt;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
rt = (ReorderTuple *) palloc(sizeof(ReorderTuple));
|
2019-03-11 20:46:41 +01:00
|
|
|
rt->htup = ExecCopySlotHeapTuple(slot);
|
2015-05-15 13:26:51 +02:00
|
|
|
rt->orderbyvals =
|
|
|
|
(Datum *) palloc(sizeof(Datum) * scandesc->numberOfOrderBys);
|
|
|
|
rt->orderbynulls =
|
|
|
|
(bool *) palloc(sizeof(bool) * scandesc->numberOfOrderBys);
|
|
|
|
for (i = 0; i < node->iss_NumOrderByKeys; i++)
|
|
|
|
{
|
|
|
|
if (!orderbynulls[i])
|
|
|
|
rt->orderbyvals[i] = datumCopy(orderbyvals[i],
|
|
|
|
node->iss_OrderByTypByVals[i],
|
|
|
|
node->iss_OrderByTypLens[i]);
|
|
|
|
else
|
|
|
|
rt->orderbyvals[i] = (Datum) 0;
|
|
|
|
rt->orderbynulls[i] = orderbynulls[i];
|
|
|
|
}
|
|
|
|
pairingheap_add(node->iss_ReorderQueue, &rt->ph_node);
|
|
|
|
|
|
|
|
MemoryContextSwitchTo(oldContext);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Helper function to pop the next tuple from the reorder queue.
|
|
|
|
*/
|
|
|
|
static HeapTuple
|
|
|
|
reorderqueue_pop(IndexScanState *node)
|
|
|
|
{
|
|
|
|
HeapTuple result;
|
|
|
|
ReorderTuple *topmost;
|
2015-05-23 21:22:25 +02:00
|
|
|
int i;
|
2015-05-15 13:26:51 +02:00
|
|
|
|
|
|
|
topmost = (ReorderTuple *) pairingheap_remove_first(node->iss_ReorderQueue);
|
|
|
|
|
|
|
|
result = topmost->htup;
|
2015-05-23 21:22:25 +02:00
|
|
|
for (i = 0; i < node->iss_NumOrderByKeys; i++)
|
|
|
|
{
|
|
|
|
if (!node->iss_OrderByTypByVals[i] && !topmost->orderbynulls[i])
|
|
|
|
pfree(DatumGetPointer(topmost->orderbyvals[i]));
|
|
|
|
}
|
2015-05-15 13:26:51 +02:00
|
|
|
pfree(topmost->orderbyvals);
|
|
|
|
pfree(topmost->orderbynulls);
|
|
|
|
pfree(topmost);
|
|
|
|
|
|
|
|
return result;
|
|
|
|
}
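/*
 * Illustrative sketch, not part of this file: pairingheap_remove_first()
 * returns the node the comparator ranks highest, so reorderqueue_cmp
 * (above) swaps its arguments to make the queue yield the *smallest*
 * ORDER BY values first.  The same trick on a hypothetical integer
 * payload:
 */
typedef struct IntQueueItem
{
	pairingheap_node ph_node;	/* first member, so casts from the node work */
	int			value;
} IntQueueItem;

static int
int_queue_cmp_sketch(const pairingheap_node *a, const pairingheap_node *b,
					 void *arg)
{
	const IntQueueItem *ia = (const IntQueueItem *) a;
	const IntQueueItem *ib = (const IntQueueItem *) b;

	/* swapped operands: the smallest value is ranked first by the heap */
	return (ib->value > ia->value) - (ib->value < ia->value);
}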
|
|
|
|
|
|
|
|
|
1996-07-09 08:22:35 +02:00
|
|
|
/* ----------------------------------------------------------------
|
|
|
|
* ExecIndexScan(node)
|
|
|
|
* ----------------------------------------------------------------
|
|
|
|
*/
|
2017-07-17 09:33:49 +02:00
|
|
|
static TupleTableSlot *
|
|
|
|
ExecIndexScan(PlanState *pstate)
|
1996-07-09 08:22:35 +02:00
|
|
|
{
|
2017-07-17 09:33:49 +02:00
|
|
|
IndexScanState *node = castNode(IndexScanState, pstate);
|
|
|
|
|
2000-08-13 04:50:35 +02:00
|
|
|
/*
|
|
|
|
* If we have runtime keys and they've not already been set up, do it now.
|
|
|
|
*/
|
2005-11-25 20:47:50 +01:00
|
|
|
if (node->iss_NumRuntimeKeys != 0 && !node->iss_RuntimeKeysReady)
|
2010-07-12 19:01:06 +02:00
|
|
|
ExecReScan((PlanState *) node);
|
2000-08-13 04:50:35 +02:00
|
|
|
|
2015-05-15 13:26:51 +02:00
|
|
|
if (node->iss_NumOrderByKeys > 0)
|
|
|
|
return ExecScan(&node->ss,
|
|
|
|
(ExecScanAccessMtd) IndexNextWithReorder,
|
|
|
|
(ExecScanRecheckMtd) IndexRecheck);
|
|
|
|
else
|
|
|
|
return ExecScan(&node->ss,
|
|
|
|
(ExecScanAccessMtd) IndexNext,
|
|
|
|
(ExecScanRecheckMtd) IndexRecheck);
|
1996-07-09 08:22:35 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
/* ----------------------------------------------------------------
|
2010-07-12 19:01:06 +02:00
|
|
|
* ExecReScanIndexScan(node)
|
|
|
|
*
|
|
|
|
* Recalculates the values of any scan keys whose value depends on
|
|
|
|
* information known at runtime, then rescans the indexed relation.
|
1996-07-09 08:22:35 +02:00
|
|
|
*
|
|
|
|
* Updating the scan key was formerly done separately in
|
2000-07-12 04:37:39 +02:00
|
|
|
* ExecUpdateIndexScanKeys. Integrating it into ReScan makes
|
|
|
|
* rescans of indices and relations/general streams more uniform.
|
1996-07-09 08:22:35 +02:00
|
|
|
* ----------------------------------------------------------------
|
|
|
|
*/
|
|
|
|
void
|
2010-07-12 19:01:06 +02:00
|
|
|
ExecReScanIndexScan(IndexScanState *node)
|
1996-07-09 08:22:35 +02:00
|
|
|
{
|
1998-08-02 00:12:13 +02:00
|
|
|
/*
|
2010-07-12 19:01:06 +02:00
|
|
|
* If we are doing runtime key calculations (ie, any of the index key
|
|
|
|
* values weren't simple Consts), compute the new key values. But first,
|
|
|
|
* reset the context so we don't leak memory as each outer tuple is
|
|
|
|
* scanned. Note this assumes that we will recalculate *all* runtime keys
|
|
|
|
* on each call.
|
1998-08-02 00:12:13 +02:00
|
|
|
*/
|
2005-11-25 20:47:50 +01:00
|
|
|
if (node->iss_NumRuntimeKeys != 0)
|
2010-07-12 19:01:06 +02:00
|
|
|
{
|
|
|
|
ExprContext *econtext = node->iss_RuntimeContext;
|
|
|
|
|
|
|
|
ResetExprContext(econtext);
|
2005-04-25 03:30:14 +02:00
|
|
|
ExecIndexEvalRuntimeKeys(econtext,
|
2005-11-25 20:47:50 +01:00
|
|
|
node->iss_RuntimeKeys,
|
|
|
|
node->iss_NumRuntimeKeys);
|
2010-07-12 19:01:06 +02:00
|
|
|
}
|
2005-11-25 20:47:50 +01:00
|
|
|
node->iss_RuntimeKeysReady = true;
|
2002-02-11 21:10:50 +01:00
|
|
|
|
2015-05-25 13:42:21 +02:00
|
|
|
/* flush the reorder queue */
|
|
|
|
if (node->iss_ReorderQueue)
|
|
|
|
{
|
2022-02-14 01:26:55 +01:00
|
|
|
HeapTuple tuple;
|
2022-05-12 21:17:30 +02:00
|
|
|
|
2015-05-25 13:42:21 +02:00
|
|
|
while (!pairingheap_is_empty(node->iss_ReorderQueue))
|
2022-02-14 01:26:55 +01:00
|
|
|
{
|
|
|
|
tuple = reorderqueue_pop(node);
|
|
|
|
heap_freetuple(tuple);
|
|
|
|
}
|
2015-05-25 13:42:21 +02:00
|
|
|
}
|
|
|
|
|
2017-08-30 19:18:16 +02:00
|
|
|
/* reset index scan */
|
2017-02-15 19:53:24 +01:00
|
|
|
if (node->iss_ScanDesc)
|
|
|
|
index_rescan(node->iss_ScanDesc,
|
|
|
|
node->iss_ScanKeys, node->iss_NumScanKeys,
|
|
|
|
node->iss_OrderByKeys, node->iss_NumOrderByKeys);
|
2015-05-25 13:42:21 +02:00
|
|
|
node->iss_ReachedEnd = false;
|
Re-implement EvalPlanQual processing to improve its performance and eliminate
a lot of strange behaviors that occurred in join cases. We now identify the
"current" row for every joined relation in UPDATE, DELETE, and SELECT FOR
UPDATE/SHARE queries. If an EvalPlanQual recheck is necessary, we jam the
appropriate row into each scan node in the rechecking plan, forcing it to emit
only that one row. The former behavior could rescan the whole of each joined
relation for each recheck, which was terrible for performance, and what's much
worse could result in duplicated output tuples.
Also, the original implementation of EvalPlanQual could not re-use the recheck
execution tree --- it had to go through a full executor init and shutdown for
every row to be tested. To avoid this overhead, I've associated a special
runtime Param with each LockRows or ModifyTable plan node, and arranged to
make every scan node below such a node depend on that Param. Thus, by
signaling a change in that Param, the EPQ machinery can just rescan the
already-built test plan.
This patch also adds a prohibition on set-returning functions in the
targetlist of SELECT FOR UPDATE/SHARE. This is needed to avoid the
duplicate-output-tuple problem. It seems fairly reasonable since the
other restrictions on SELECT FOR UPDATE are meant to ensure that there
is a unique correspondence between source tuples and result tuples,
which an output SRF destroys as much as anything else does.
2009-10-26 03:26:45 +01:00
|
|
|
|
|
|
|
ExecScanReScan(&node->ss);
|
2005-04-25 03:30:14 +02:00
|
|
|
}
|
2003-08-22 22:26:43 +02:00
|
|
|
|
2002-02-11 21:10:50 +01:00
|
|
|
|
2005-04-25 03:30:14 +02:00
|
|
|
/*
|
|
|
|
* ExecIndexEvalRuntimeKeys
|
|
|
|
* Evaluate any runtime key values, and update the scankeys.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
ExecIndexEvalRuntimeKeys(ExprContext *econtext,
|
2005-11-25 20:47:50 +01:00
|
|
|
IndexRuntimeKeyInfo *runtimeKeys, int numRuntimeKeys)
|
2005-04-25 03:30:14 +02:00
|
|
|
{
|
|
|
|
int j;
|
2009-08-23 20:26:08 +02:00
|
|
|
MemoryContext oldContext;
|
|
|
|
|
|
|
|
/* We want to keep the key values in per-tuple memory */
|
|
|
|
oldContext = MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);
|
2002-02-11 21:10:50 +01:00
|
|
|
|
2005-11-25 20:47:50 +01:00
|
|
|
for (j = 0; j < numRuntimeKeys; j++)
|
2005-04-25 03:30:14 +02:00
|
|
|
{
|
2005-11-25 20:47:50 +01:00
|
|
|
ScanKey scan_key = runtimeKeys[j].scan_key;
|
|
|
|
ExprState *key_expr = runtimeKeys[j].key_expr;
|
|
|
|
Datum scanvalue;
|
|
|
|
bool isNull;
|
|
|
|
|
2005-04-25 03:30:14 +02:00
|
|
|
/*
|
2005-11-25 20:47:50 +01:00
|
|
|
* For each run-time key, extract the run-time expression and evaluate
|
2010-07-12 19:01:06 +02:00
|
|
|
* it with respect to the current context. We then stick the result
|
|
|
|
* into the proper scan key.
|
2005-04-25 03:30:14 +02:00
|
|
|
*
|
|
|
|
* Note: the result of the eval could be a pass-by-ref value that's
|
2010-07-12 19:01:06 +02:00
|
|
|
* stored in some outer scan's tuple, not in
|
2005-04-25 03:30:14 +02:00
|
|
|
* econtext->ecxt_per_tuple_memory. We assume that the outer tuple
|
|
|
|
* will stay put throughout our scan. If this is wrong, we could copy
|
|
|
|
* the result into our context explicitly, but I think that's not
|
2009-08-23 20:26:08 +02:00
|
|
|
* necessary.
|
|
|
|
*
|
|
|
|
* It's also entirely possible that the result of the eval is a
|
|
|
|
* toasted value. In this case we should forcibly detoast it, to
|
|
|
|
* avoid repeat detoastings each time the value is examined by an
|
|
|
|
* index support function.
|
2005-04-25 03:30:14 +02:00
|
|
|
*/
|
2009-08-23 20:26:08 +02:00
|
|
|
scanvalue = ExecEvalExpr(key_expr,
|
|
|
|
econtext,
|
2017-01-19 23:12:38 +01:00
|
|
|
&isNull);
|
2005-11-25 20:47:50 +01:00
|
|
|
if (isNull)
|
2009-08-23 20:26:08 +02:00
|
|
|
{
|
|
|
|
scan_key->sk_argument = scanvalue;
|
2005-11-25 20:47:50 +01:00
|
|
|
scan_key->sk_flags |= SK_ISNULL;
|
2009-08-23 20:26:08 +02:00
|
|
|
}
|
2005-11-25 20:47:50 +01:00
|
|
|
else
|
2009-08-23 20:26:08 +02:00
|
|
|
{
|
|
|
|
if (runtimeKeys[j].key_toastable)
|
|
|
|
scanvalue = PointerGetDatum(PG_DETOAST_DATUM(scanvalue));
|
|
|
|
scan_key->sk_argument = scanvalue;
|
2005-11-25 20:47:50 +01:00
|
|
|
scan_key->sk_flags &= ~SK_ISNULL;
|
2009-08-23 20:26:08 +02:00
|
|
|
}
|
2005-11-25 20:47:50 +01:00
|
|
|
}
|
2009-08-23 20:26:08 +02:00
|
|
|
|
|
|
|
MemoryContextSwitchTo(oldContext);
|
2005-11-25 20:47:50 +01:00
|
|
|
}
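/*
 * Illustrative sketch, not part of this file: the essence of what the
 * function above does for one key.  For a qual like "indcol = $1" the
 * planner leaves sk_argument unset, and each (re)scan stores the freshly
 * evaluated datum into the scan key.  Hypothetical helper:
 */
static void
store_runtime_key_sketch(ScanKey scan_key, Datum value, bool isnull)
{
	scan_key->sk_argument = value;
	if (isnull)
		scan_key->sk_flags |= SK_ISNULL;
	else
		scan_key->sk_flags &= ~SK_ISNULL;
}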
|
|
|
|
|
|
|
|
/*
|
|
|
|
* ExecIndexEvalArrayKeys
|
|
|
|
* Evaluate any array key values, and set up to iterate through arrays.
|
|
|
|
*
|
2017-08-16 06:22:32 +02:00
|
|
|
* Returns true if there are array elements to consider; false means there
|
|
|
|
* is at least one null or empty array, so no match is possible. On true
|
2005-11-25 20:47:50 +01:00
|
|
|
* result, the scankeys are initialized with the first elements of the arrays.
|
|
|
|
*/
|
|
|
|
bool
|
|
|
|
ExecIndexEvalArrayKeys(ExprContext *econtext,
|
|
|
|
IndexArrayKeyInfo *arrayKeys, int numArrayKeys)
|
|
|
|
{
|
|
|
|
bool result = true;
|
|
|
|
int j;
|
|
|
|
MemoryContext oldContext;
|
|
|
|
|
|
|
|
/* We want to keep the arrays in per-tuple memory */
|
|
|
|
oldContext = MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);
|
|
|
|
|
|
|
|
for (j = 0; j < numArrayKeys; j++)
|
|
|
|
{
|
|
|
|
ScanKey scan_key = arrayKeys[j].scan_key;
|
|
|
|
ExprState *array_expr = arrayKeys[j].array_expr;
|
|
|
|
Datum arraydatum;
|
|
|
|
bool isNull;
|
|
|
|
ArrayType *arrayval;
|
|
|
|
int16 elmlen;
|
|
|
|
bool elmbyval;
|
|
|
|
char elmalign;
|
|
|
|
int num_elems;
|
|
|
|
Datum *elem_values;
|
|
|
|
bool *elem_nulls;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Compute and deconstruct the array expression. (Notes in
|
|
|
|
* ExecIndexEvalRuntimeKeys() apply here too.)
|
|
|
|
*/
|
|
|
|
arraydatum = ExecEvalExpr(array_expr,
|
|
|
|
econtext,
|
2017-01-19 23:12:38 +01:00
|
|
|
&isNull);
|
2005-11-25 20:47:50 +01:00
|
|
|
if (isNull)
|
2005-04-25 03:30:14 +02:00
|
|
|
{
|
2005-11-25 20:47:50 +01:00
|
|
|
result = false;
|
|
|
|
break; /* no point in evaluating more */
|
|
|
|
}
|
|
|
|
arrayval = DatumGetArrayTypeP(arraydatum);
|
|
|
|
/* We could cache this data, but not clear it's worth it */
|
|
|
|
get_typlenbyvalalign(ARR_ELEMTYPE(arrayval),
|
|
|
|
&elmlen, &elmbyval, &elmalign);
|
|
|
|
deconstruct_array(arrayval,
|
|
|
|
ARR_ELEMTYPE(arrayval),
|
|
|
|
elmlen, elmbyval, elmalign,
|
|
|
|
&elem_values, &elem_nulls, &num_elems);
|
|
|
|
if (num_elems <= 0)
|
|
|
|
{
|
|
|
|
result = false;
|
|
|
|
break; /* no point in evaluating more */
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Note: we expect the previous array data, if any, to be
|
|
|
|
* automatically freed by resetting the per-tuple context; hence no
|
|
|
|
* pfree's here.
|
|
|
|
*/
|
|
|
|
arrayKeys[j].elem_values = elem_values;
|
|
|
|
arrayKeys[j].elem_nulls = elem_nulls;
|
|
|
|
arrayKeys[j].num_elems = num_elems;
|
|
|
|
scan_key->sk_argument = elem_values[0];
|
|
|
|
if (elem_nulls[0])
|
|
|
|
scan_key->sk_flags |= SK_ISNULL;
|
|
|
|
else
|
|
|
|
scan_key->sk_flags &= ~SK_ISNULL;
|
|
|
|
arrayKeys[j].next_elem = 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
MemoryContextSwitchTo(oldContext);
|
|
|
|
|
|
|
|
return result;
|
|
|
|
}
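/*
 * Illustrative sketch, not part of this file: deconstructing an int4[]
 * the way the function above handles an "indcol = ANY($1)" array key.
 * Hypothetical helper that merely counts the non-null elements.
 */
static int
count_array_elems_sketch(ArrayType *arrayval)
{
	Datum	   *elem_values;
	bool	   *elem_nulls;
	int			num_elems,
				i,
				count = 0;

	deconstruct_array(arrayval, INT4OID,
					  sizeof(int32), true, TYPALIGN_INT,
					  &elem_values, &elem_nulls, &num_elems);
	for (i = 0; i < num_elems; i++)
	{
		if (!elem_nulls[i])
			count++;
	}
	return count;
}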
|
|
|
|
|
|
|
|
/*
|
|
|
|
* ExecIndexAdvanceArrayKeys
|
|
|
|
* Advance to the next set of array key values, if any.
|
|
|
|
*
|
2017-08-16 06:22:32 +02:00
|
|
|
* Returns true if there is another set of values to consider, false if not.
|
|
|
|
* On true result, the scankeys are initialized with the next set of values.
|
2005-11-25 20:47:50 +01:00
|
|
|
*/
|
|
|
|
bool
|
|
|
|
ExecIndexAdvanceArrayKeys(IndexArrayKeyInfo *arrayKeys, int numArrayKeys)
|
|
|
|
{
|
|
|
|
bool found = false;
|
|
|
|
int j;
|
|
|
|
|
2008-03-18 04:54:52 +01:00
|
|
|
/*
|
|
|
|
* Note we advance the rightmost array key most quickly, since it will
|
|
|
|
* correspond to the lowest-order index column among the available
|
|
|
|
* qualifications. This is hypothesized to result in better locality of
|
|
|
|
* access in the index.
|
|
|
|
*/
|
|
|
|
for (j = numArrayKeys - 1; j >= 0; j--)
|
2005-11-25 20:47:50 +01:00
|
|
|
{
|
|
|
|
ScanKey scan_key = arrayKeys[j].scan_key;
|
|
|
|
int next_elem = arrayKeys[j].next_elem;
|
|
|
|
int num_elems = arrayKeys[j].num_elems;
|
|
|
|
Datum *elem_values = arrayKeys[j].elem_values;
|
|
|
|
bool *elem_nulls = arrayKeys[j].elem_nulls;
|
|
|
|
|
|
|
|
if (next_elem >= num_elems)
|
|
|
|
{
|
|
|
|
next_elem = 0;
|
|
|
|
found = false; /* need to advance next array key */
|
2005-04-25 03:30:14 +02:00
|
|
|
}
|
2005-11-25 20:47:50 +01:00
|
|
|
else
|
|
|
|
found = true;
|
|
|
|
scan_key->sk_argument = elem_values[next_elem];
|
|
|
|
if (elem_nulls[next_elem])
|
|
|
|
scan_key->sk_flags |= SK_ISNULL;
|
|
|
|
else
|
|
|
|
scan_key->sk_flags &= ~SK_ISNULL;
|
|
|
|
arrayKeys[j].next_elem = next_elem + 1;
|
|
|
|
if (found)
|
|
|
|
break;
|
2002-02-11 21:10:50 +01:00
|
|
|
}
|
2005-11-25 20:47:50 +01:00
|
|
|
|
|
|
|
return found;
|
1996-07-09 08:22:35 +02:00
|
|
|
}
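/*
 * Illustrative sketch, not part of this file: the "odometer" advancement
 * the function above performs across multiple array keys, reduced to
 * plain counters.  pos[n-1] ticks fastest; a wrap carries leftward.
 */
static bool
odometer_advance_sketch(int *pos, const int *num_elems, int n)
{
	int			j;

	for (j = n - 1; j >= 0; j--)
	{
		if (++pos[j] < num_elems[j])
			return true;		/* advanced: a new combination is ready */
		pos[j] = 0;				/* wrapped: carry into the key to the left */
	}
	return false;				/* every key wrapped: iteration is complete */
}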
|
|
|
|
|
2005-11-25 20:47:50 +01:00
|
|
|
|
1996-07-09 08:22:35 +02:00
|
|
|
/* ----------------------------------------------------------------
|
|
|
|
* ExecEndIndexScan
|
|
|
|
* ----------------------------------------------------------------
|
|
|
|
*/
|
|
|
|
void
|
2002-12-05 16:50:39 +01:00
|
|
|
ExecEndIndexScan(IndexScanState *node)
|
1996-07-09 08:22:35 +02:00
|
|
|
{
|
2005-04-25 03:30:14 +02:00
|
|
|
Relation indexRelationDesc;
|
|
|
|
IndexScanDesc indexScanDesc;
|
1997-09-07 07:04:48 +02:00
|
|
|
|
1996-07-09 08:22:35 +02:00
|
|
|
/*
|
|
|
|
* extract information from the node
|
|
|
|
*/
|
2005-04-25 03:30:14 +02:00
|
|
|
indexRelationDesc = node->iss_RelationDesc;
|
|
|
|
indexScanDesc = node->iss_ScanDesc;
|
1997-09-07 07:04:48 +02:00
|
|
|
|
1996-07-09 08:22:35 +02:00
|
|
|
/*
|
2007-05-25 19:54:25 +02:00
|
|
|
* close the index relation (no-op if we didn't open it)
|
2002-02-19 21:11:20 +01:00
|
|
|
*/
|
2007-05-25 19:54:25 +02:00
|
|
|
if (indexScanDesc)
|
|
|
|
index_endscan(indexScanDesc);
|
|
|
|
if (indexRelationDesc)
|
|
|
|
index_close(indexRelationDesc, NoLock);
|
1996-07-09 08:22:35 +02:00
|
|
|
}
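/*
 * Illustrative sketch, not part of this file: the open/scan/close
 * lifecycle that ExecInitIndexScan, IndexNext and ExecEndIndexScan
 * spread across the executor, shown as one standalone loop.  The heap
 * relation, index OID, snapshot and scan keys are assumed given.
 */
static void
index_scan_lifecycle_sketch(Relation heaprel, Oid indexoid,
							Snapshot snapshot, ScanKey keys, int nkeys)
{
	Relation	indexrel = index_open(indexoid, AccessShareLock);
	IndexScanDesc scan = index_beginscan(heaprel, indexrel, snapshot,
										 nkeys, 0);
	TupleTableSlot *slot = table_slot_create(heaprel, NULL);

	index_rescan(scan, keys, nkeys, NULL, 0);
	while (index_getnext_slot(scan, ForwardScanDirection, slot))
	{
		/* one heap tuple located via the index is now in the slot */
	}

	ExecDropSingleTupleTableSlot(slot);
	index_endscan(scan);
	index_close(indexrel, AccessShareLock);
}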
|
|
|
|
|
|
|
|
/* ----------------------------------------------------------------
|
|
|
|
* ExecIndexMarkPos
|
Avoid crash during EvalPlanQual recheck of an inner indexscan.
Commit 09529a70b changed nodeIndexscan.c and nodeIndexonlyscan.c to
postpone initialization of the indexscan proper until the first tuple
fetch. It overlooked the question of mark/restore behavior, which means
that if some caller attempts to mark the scan before the first tuple fetch,
you get a null pointer dereference.
The only existing user of mark/restore is nodeMergejoin.c, which (somewhat
accidentally) will never attempt to set a mark before the first inner tuple
unless the inner child node is a Material node. Hence the case can't arise
normally, so it seems sufficient to document the assumption at both ends.
However, during an EvalPlanQual recheck, ExecScanFetch doesn't call
IndexNext but just returns the jammed-in test tuple. Therefore, if we're
doing a recheck in a plan tree with a mergejoin with inner indexscan,
it's possible to reach ExecIndexMarkPos with iss_ScanDesc still null,
as reported by Guo Xiang Tan in bug #15032.
Really, when there's a test tuple supplied during an EPQ recheck, touching
the index at all is the wrong thing: rather, the behavior of mark/restore
ought to amount to saving and restoring the es_epqScanDone flag. We can
avoid finding a place to actually save the flag, for the moment, because
given the assumption that no caller will set a mark before fetching a
tuple, es_epqScanDone must always be set by the time we try to mark.
So the actual behavior change required is just to not reach the index
access if a test tuple is supplied.
The set of plan node types that need to consider this issue are those
that support EPQ test tuples (i.e., call ExecScan()) and also support
mark/restore; which is to say, IndexScan, IndexOnlyScan, and perhaps
CustomScan. It's tempting to try to fix the problem in one place by
teaching ExecMarkPos() itself about EPQ; but ExecMarkPos supports some
plan types that aren't Scans, and also it seems risky to make assumptions
about what a CustomScan wants to do here. Also, the most likely future
change here is to decide that we do need to support marks placed before
the first tuple, which would require additional work in IndexScan and
IndexOnlyScan in any case. Hence, fix the EPQ issue in nodeIndexscan.c
and nodeIndexonlyscan.c, accepting the small amount of code duplicated
thereby, and leave it to CustomScan providers to fix this bug if they
have it.
Back-patch to v10 where commit 09529a70b came in. In earlier branches,
the index_markpos() call is a waste of cycles when EPQ is active, but
no more than that, so it doesn't seem appropriate to back-patch further.
Discussion: https://postgr.es/m/20180126074932.3098.97815@wrigleys.postgresql.org
2018-01-27 19:52:24 +01:00
|
|
|
*
|
|
|
|
* Note: we assume that no caller attempts to set a mark before having read
|
|
|
|
* at least one tuple. Otherwise, iss_ScanDesc might still be NULL.
|
1996-07-09 08:22:35 +02:00
|
|
|
* ----------------------------------------------------------------
|
|
|
|
*/
|
|
|
|
void
|
2002-12-05 16:50:39 +01:00
|
|
|
ExecIndexMarkPos(IndexScanState *node)
|
1996-07-09 08:22:35 +02:00
|
|
|
{
|
2018-01-27 19:52:24 +01:00
|
|
|
EState *estate = node->ss.ps.state;
|
Reorder EPQ work, to fix rowmark related bugs and improve efficiency.
In ad0bda5d24ea I changed the EvalPlanQual machinery to store
substitution tuples in slots, instead of using plain HeapTuples. The
main motivation for that was that using HeapTuples will be inefficient
for future tableams. But it turns out that that conversion was buggy
for non-locking rowmarks - the wrong tuple descriptor was used to
create the slot.
As a secondary issue 5db6df0c0 changed ExecLockRows() to begin EPQ
earlier, to allow fetching the locked rows directly into the EPQ
slots, instead of having to copy tuples around. Unfortunately, as Tom
complained, that forces some expensive initialization to happen
earlier.
As a third issue, the test coverage for EPQ was clearly insufficient.
Fixing the first issue is unfortunately not trivial: Non-locked row
marks were fetched at the start of EPQ, and we don't have the type
information for the rowmarks available at that point. While we could
change that, it's not easy. It might be worthwhile to change that at
some point, but to fix this bug, it seems better to fetch
non-locking rowmarks lazily, when they're actually needed, rather than
eagerly. They're referenced at most once, and in cases where EPQ
fails, might never be referenced. Fetching them when needed also
increases locality a bit.
To be able to fetch rowmarks during execution, rather than
initialization, we need to be able to access the active EPQState, as
that contains necessary data. To do so move EPQ related data from
EState to EPQState, and, only for EStates created as part of EPQ,
reference the associated EPQState from EState.
To fix the second issue, change EPQ initialization to allow
EvalPlanQualSlot() to be used before EvalPlanQualBegin() (but
obviously still requiring EvalPlanQualInit() to have been done).
As these changes made struct EState harder to understand, e.g. by
adding multiple EStates, significantly reorder the members, and add a
lot more comments.
Also add a few more EPQ tests, including one that fails for the first
issue above. More is needed.
Reported-By: yi huang
Author: Andres Freund
Reviewed-By: Tom Lane
Discussion:
https://postgr.es/m/CAHU7rYZo_C4ULsAx_LAj8az9zqgrD8WDd4hTegDTMM1LMqrBsg@mail.gmail.com
https://postgr.es/m/24530.1562686693@sss.pgh.pa.us
Backpatch: 12-, where the EPQ changes were introduced
2019-09-05 22:00:20 +02:00
|
|
|
EPQState *epqstate = estate->es_epq_active;
|
2018-01-27 19:52:24 +01:00
|
|
|
|
2019-09-05 22:00:20 +02:00
|
|
|
if (epqstate != NULL)
|
2018-01-27 19:52:24 +01:00
|
|
|
{
|
|
|
|
/*
|
|
|
|
* We are inside an EvalPlanQual recheck. If a test tuple exists for
|
|
|
|
* this relation, then we shouldn't access the index at all. We would
|
|
|
|
* instead need to save, and later restore, the state of the
|
2019-09-05 22:00:20 +02:00
|
|
|
* relsubs_done flag, so that re-fetching the test tuple is possible.
|
|
|
|
* However, given the assumption that no caller sets a mark at the
|
|
|
|
* start of the scan, we can only get here with relsubs_done[i]
|
2018-01-27 19:52:24 +01:00
|
|
|
* already set, and so no state need be saved.
|
|
|
|
*/
|
|
|
|
Index scanrelid = ((Scan *) node->ss.ps.plan)->scanrelid;
|
|
|
|
|
|
|
|
Assert(scanrelid > 0);
|
2019-09-05 22:00:20 +02:00
|
|
|
if (epqstate->relsubs_slot[scanrelid - 1] != NULL ||
|
|
|
|
epqstate->relsubs_rowmark[scanrelid - 1] != NULL)
|
2018-01-27 19:52:24 +01:00
|
|
|
{
|
|
|
|
/* Verify the claim above */
|
2019-09-05 22:00:20 +02:00
|
|
|
if (!epqstate->relsubs_done[scanrelid - 1])
|
2018-01-27 19:52:24 +01:00
|
|
|
elog(ERROR, "unexpected ExecIndexMarkPos call in EPQ recheck");
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2005-04-25 03:30:14 +02:00
|
|
|
index_markpos(node->iss_ScanDesc);
|
1996-07-09 08:22:35 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
/* ----------------------------------------------------------------
|
|
|
|
* ExecIndexRestrPos
|
|
|
|
* ----------------------------------------------------------------
|
|
|
|
*/
|
|
|
|
void
|
2002-12-05 16:50:39 +01:00
|
|
|
ExecIndexRestrPos(IndexScanState *node)
|
1996-07-09 08:22:35 +02:00
|
|
|
{
|
2018-01-27 19:52:24 +01:00
|
|
|
EState *estate = node->ss.ps.state;
|
Reorder EPQ work, to fix rowmark related bugs and improve efficiency.
In ad0bda5d24ea I changed the EvalPlanQual machinery to store
substitution tuples in slots, instead of using plain HeapTuples. The
main motivation for that was that using HeapTuples will be inefficient
for future tableams. But it turns out that that conversion was buggy
for non-locking rowmarks - the wrong tuple descriptor was used to
create the slot.
As a secondary issue, 5db6df0c0 changed ExecLockRows() to begin EPQ
earlier, to allow fetching the locked rows directly into the EPQ
slots, instead of having to copy tuples around. Unfortunately, as Tom
complained, that forces some expensive initialization to happen
earlier.
As a third issue, the test coverage for EPQ was clearly insufficient.
Fixing the first issue is unfortunately not trivial: Non-locked row
marks were fetched at the start of EPQ, and we don't have the type
information for the rowmarks available at that point. While we could
change that, it's not easy. It might be worthwhile to change that at
some point, but to fix this bug, it seems better to delay fetching
non-locking rowmarks until they're actually needed, rather than
fetching them eagerly. They're referenced at most once, and in cases
where EPQ fails, they might never be referenced. Fetching them when
needed also
increases locality a bit.
To be able to fetch rowmarks during execution, rather than
initialization, we need to be able to access the active EPQState, as
that contains necessary data. To do so move EPQ related data from
EState to EPQState, and, only for EStates created as part of EPQ,
reference the associated EPQState from EState.
To fix the second issue, change EPQ initialization to allow
EvalPlanQualSlot() to be used before EvalPlanQualBegin() (but
obviously still requiring EvalPlanQualInit() to have been done).
As these changes made struct EState harder to understand, e.g. by
adding multiple EStates, significantly reorder the members, and add a
lot more comments.
Also add a few more EPQ tests, including one that fails for the first
issue above. More is needed.
Reported-By: yi huang
Author: Andres Freund
Reviewed-By: Tom Lane
Discussion:
https://postgr.es/m/CAHU7rYZo_C4ULsAx_LAj8az9zqgrD8WDd4hTegDTMM1LMqrBsg@mail.gmail.com
https://postgr.es/m/24530.1562686693@sss.pgh.pa.us
Backpatch: 12-, where the EPQ changes were introduced
2019-09-05 22:00:20 +02:00
|
|
|
EPQState *epqstate = estate->es_epq_active;
|
2018-01-27 19:52:24 +01:00
|
|
|
|
2019-09-05 22:00:20 +02:00
|
|
|
if (estate->es_epq_active != NULL)
|
2018-01-27 19:52:24 +01:00
|
|
|
{
|
|
|
|
/* See comments in ExecIndexMarkPos */
|
|
|
|
Index scanrelid = ((Scan *) node->ss.ps.plan)->scanrelid;
|
|
|
|
|
|
|
|
Assert(scanrelid > 0);
|
2019-09-05 22:00:20 +02:00
|
|
|
if (epqstate->relsubs_slot[scanrelid - 1] != NULL ||
|
|
|
|
epqstate->relsubs_rowmark[scanrelid - 1] != NULL)
|
2018-01-27 19:52:24 +01:00
|
|
|
{
|
|
|
|
/* Verify the claim above */
|
2019-09-05 22:00:20 +02:00
|
|
|
if (!epqstate->relsubs_done[scanrelid - 1])
|
2018-01-27 19:52:24 +01:00
|
|
|
elog(ERROR, "unexpected ExecIndexRestrPos call in EPQ recheck");
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2005-04-25 03:30:14 +02:00
|
|
|
index_restrpos(node->iss_ScanDesc);
|
1996-07-09 08:22:35 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
/* ----------------------------------------------------------------
|
|
|
|
* ExecInitIndexScan
|
2003-08-22 22:26:43 +02:00
|
|
|
*
|
1996-07-09 08:22:35 +02:00
|
|
|
* Initializes the index scan's state information, creates
|
|
|
|
* scan keys, and opens the base and index relations.
|
|
|
|
*
|
|
|
|
* Note: index scans have 2 sets of state information because
|
|
|
|
* we have to keep track of the base relation and the
|
2005-04-25 03:30:14 +02:00
|
|
|
* index relation.
|
1996-07-09 08:22:35 +02:00
|
|
|
* ----------------------------------------------------------------
|
|
|
|
*/
|
2002-12-05 16:50:39 +01:00
|
|
|
IndexScanState *
|
2006-02-28 05:10:28 +01:00
|
|
|
ExecInitIndexScan(IndexScan *node, EState *estate, int eflags)
|
1996-07-09 08:22:35 +02:00
|
|
|
{
|
|
|
|
IndexScanState *indexstate;
|
|
|
|
Relation currentRelation;
|
Make queries' locking of indexes more consistent.
The assertions added by commit b04aeb0a0 exposed that there are some
code paths wherein the executor will try to open an index without
holding any lock on it. We do have some lock on the index's table,
so it seems likely that there's no fatal problem with this (for
instance, the index couldn't get dropped from under us). Still,
it's bad practice and we should fix it.
To do so, remove the optimizations in ExecInitIndexScan and friends
that tried to avoid taking a lock on an index belonging to a target
relation, and just take the lock always. In non-bug cases, this
will result in no additional shared-memory access, since we'll find
in the local lock table that we already have a lock of the desired
type; hence, no significant performance degradation should occur.
Also, adjust the planner and executor so that the type of lock taken
on an index is always identical to the type of lock taken for its table,
by relying on the recently added RangeTblEntry.rellockmode field.
This avoids some corner cases where that might not have been true
before (possibly resulting in extra locking overhead), and prevents
future maintenance issues from having multiple bits of logic that
all needed to be in sync. In addition, this change removes all core
calls to ExecRelationIsTargetRelation, which avoids a possible O(N^2)
startup penalty for queries with large numbers of target relations.
(We'd probably remove that function altogether, were it not that we
advertise it as something that FDWs might want to use.)
Also adjust some places in selfuncs.c to not take any lock on indexes
they are transiently opening, since we can assume that plancat.c
did that already.
In passing, change gin_clean_pending_list() to take RowExclusiveLock
not AccessShareLock on its target index. Although it's not clear that
that's actually a bug, it seemed very strange for a function that's
explicitly going to modify the index to use only AccessShareLock.
David Rowley, reviewed by Julien Rouhaud and Amit Langote,
a bit of further tweaking by me
Discussion: https://postgr.es/m/19465.1541636036@sss.pgh.pa.us
2019-04-04 21:12:51 +02:00
|
|
|
LOCKMODE lockmode;
|
1997-09-07 07:04:48 +02:00
|
|
|
|
1996-07-09 08:22:35 +02:00
|
|
|
/*
|
2002-12-05 16:50:39 +01:00
|
|
|
* create state structure
|
1996-07-09 08:22:35 +02:00
|
|
|
*/
|
2002-12-05 16:50:39 +01:00
|
|
|
indexstate = makeNode(IndexScanState);
|
|
|
|
indexstate->ss.ps.plan = (Plan *) node;
|
|
|
|
indexstate->ss.ps.state = estate;
|
2017-07-17 09:33:49 +02:00
|
|
|
indexstate->ss.ps.ExecProcNode = ExecIndexScan;
|
1997-09-07 07:04:48 +02:00
|
|
|
|
1996-07-09 08:22:35 +02:00
|
|
|
/*
|
2002-12-05 16:50:39 +01:00
|
|
|
* Miscellaneous initialization
|
1996-07-09 08:22:35 +02:00
|
|
|
*
|
2002-12-05 16:50:39 +01:00
|
|
|
* create expression context for node
|
1996-07-09 08:22:35 +02:00
|
|
|
*/
|
2002-12-05 16:50:39 +01:00
|
|
|
ExecAssignExprContext(estate, &indexstate->ss.ps);
|
1996-07-09 08:22:35 +02:00
|
|
|
|
2018-02-17 06:17:38 +01:00
|
|
|
/*
|
2018-10-06 21:49:37 +02:00
|
|
|
* open the scan relation
|
2018-02-17 06:17:38 +01:00
|
|
|
*/
|
|
|
|
currentRelation = ExecOpenScanRelation(estate, node->scan.scanrelid, eflags);
|
|
|
|
|
|
|
|
indexstate->ss.ss_currentRelation = currentRelation;
|
|
|
|
indexstate->ss.ss_currentScanDesc = NULL; /* no heap scan here */
|
|
|
|
|
|
|
|
/*
|
|
|
|
* get the scan type from the relation descriptor.
|
|
|
|
*/
|
|
|
|
ExecInitScanTupleSlot(estate, &indexstate->ss,
|
Introduce notion of different types of slots (without implementing them).
Upcoming work intends to allow pluggable ways of storing table data.
Accessing those table access methods from the
executor requires TupleTableSlots to carry tuples in the native
format of such storage methods; otherwise there'll be a significant
conversion overhead.
Different access methods will require different data to store tuples
efficiently (just like virtual, minimal, heap already require fields
in TupleTableSlot). To allow that without requiring additional pointer
indirections, we want to have different structs (embedding
TupleTableSlot) for different types of slots. Thus different types of
slots are needed, which requires adapting creators of slots.
The slot that most efficiently can represent a type of tuple in an
executor node will often depend on the type of slot a child node
uses. Therefore we need to track the type of slot that is returned by
nodes, so parent nodes can create slots based on that.
Relatedly, JIT compilation of tuple deforming needs to know which type
of slot a certain expression refers to, so it can create an
appropriate deforming function for the type of tuple in the slot.
But not all nodes will only return one type of slot, e.g. an append
node will potentially return different types of slots for each of its
subplans.
Therefore add a function that allows querying the type of a node's
result slot, and whether it'll always be the same type (whether it's
fixed). This can be queried using ExecGetResultSlotOps().
The scan, result, inner, and outer slot types are automatically
inferred from ExecInitScanTupleSlot(), ExecInitResultSlot(), and the
left/right subtrees, respectively. If that's not correct for a node,
it can be overridden using new fields in PlanState.
This commit does not introduce the actually abstracted implementation
of different kinds of TupleTableSlots; that will be left for a follow-up
commit. The different types of slots introduced will, for now, still
use the same backing implementation.
While this already partially invalidates the big comment in
tuptable.h, it seems to make more sense to update it later, when the
different TupleTableSlot implementations actually exist.
Author: Ashutosh Bapat and Andres Freund, with changes by Amit Khandekar
Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
2018-11-16 07:00:30 +01:00
|
|
|
RelationGetDescr(currentRelation),
|
tableam: Add and use scan APIs.
To allow table accesses to not be directly dependent on heap, several
new abstractions are needed. Specifically:
1) Heap scans need to be generalized into table scans. Do this by
introducing TableScanDesc, which will be the "base class" for
individual AMs. This contains the AM independent fields from
HeapScanDesc.
The previous heap_{beginscan,rescan,endscan} et al. have been
replaced with a table_ version.
There's no direct replacement for heap_getnext(), as that returned
a HeapTuple, which is undesirable for other AMs. Instead there's
table_scan_getnextslot(). But note that heap_getnext() lives on;
it's still used widely to access catalog tables.
This is achieved by new scan_begin, scan_end, scan_rescan,
scan_getnextslot callbacks.
2) The portion of parallel scans that's shared between backends needs
to be manageable without the user doing per-AM work. To achieve
that new parallelscan_{estimate, initialize, reinitialize}
callbacks are introduced, which operate on a new
ParallelTableScanDesc, which again can be subclassed by AMs.
As it is likely that several AMs are going to be block oriented,
block oriented callbacks that can be shared between such AMs are
provided and used by heap. table_block_parallelscan_{estimate,
initialize, reinitialize} as callbacks, and
table_block_parallelscan_{nextpage, init} for use in AMs. These
operate on a ParallelBlockTableScanDesc.
3) Index scans need to be able to access tables to return a tuple, and
there needs to be state carried across individual accesses to the
heap, such as buffers. That's now handled by introducing a
sort-of-scan IndexFetchTable, which again is intended to be
subclassed by individual AMs (for heap IndexFetchHeap).
The relevant callbacks for an AM are index_fetch_{end, begin,
reset} to create the necessary state, and index_fetch_tuple to
retrieve an indexed tuple. Note that index_fetch_tuple
implementations need to be smarter than just blindly fetching the
tuples for AMs that have optimizations similar to heap's HOT - the
currently alive tuple in the update chain needs to be fetched if
appropriate.
Similar to table_scan_getnextslot(), it's undesirable to continue
to return HeapTuples. Thus index_fetch_heap (might want to rename
that later) now accepts a slot as an argument. Core code doesn't
have a lot of call sites performing index scans without going
through the systable_* API (in contrast to loads of heap_getnext
calls and working directly with HeapTuples).
Index scans now store the result of a search in
IndexScanDesc->xs_heaptid, rather than xs_ctup->t_self. As the
target is not generally a HeapTuple anymore that seems cleaner.
To be able to sensibly adapt code to use the above, two further
callbacks have been introduced:
a) slot_callbacks returns a TupleTableSlotOps* suitable for creating
slots capable of holding a tuple of the AM's
type. table_slot_callbacks() and table_slot_create() are based
upon that, but have additional logic to deal with views, foreign
tables, etc.
While this change could have been done separately, nearly all the
call sites that needed to be adapted for the rest of this commit
also would have needed to be adapted for
table_slot_callbacks(), making separation not worthwhile.
b) tuple_satisfies_snapshot checks whether the tuple in a slot is
currently visible according to a snapshot. That's required as a few
places now don't have a buffer + HeapTuple around, but a
slot (which in heap's case internally has that information).
Additionally a few infrastructure changes were needed:
I) SysScanDesc, as used by systable_{beginscan, getnext} et al. now
internally uses a slot to keep track of tuples. While
systable_getnext() still returns HeapTuples, and will do so for the
foreseeable future, the index API (see 1) above) now only deals with
slots.
The remainder, and largest part, of this commit is then adjusting all
scans in postgres to use the new APIs.
Author: Andres Freund, Haribabu Kommi, Alvaro Herrera
Discussion:
https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
2019-03-11 20:46:41 +01:00
|
|
|
table_slot_callbacks(currentRelation));
|
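table_slot_callbacks(), introduced by the tableam commit above, is how this
code obtains slot callbacks matching the relation's table access method. A
hedged usage sketch outside the executor proper (rel is assumed to be an
already-opened Relation; illustrative only, not code from this file):

    /* Sketch: build, use, and drop a slot matching rel's table AM. */
    const TupleTableSlotOps *tts_ops = table_slot_callbacks(rel);
    TupleTableSlot *slot = MakeSingleTupleTableSlot(RelationGetDescr(rel),
                                                    tts_ops);
    /* ... fetch tuples into slot, e.g. via table_scan_getnextslot() ... */
    ExecDropSingleTupleTableSlot(slot);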
2018-02-17 06:17:38 +01:00
|
|
|
|
|
|
|
/*
|
Don't require return slots for nodes without projection.
In a lot of nodes the return slot is not required. That can either be
because the node doesn't do any projection (say an Append node), or
because the node does perform projections but the projection is
optimized away because the projection would yield an identical row.
Slots aren't that small, especially for wide rows, so it's worthwhile
to avoid creating them. It's not possible to just skip creating the
slot - it's currently used to determine the tuple descriptor returned
by ExecGetResultType(). So separate the determination of the result
type from the slot creation. The work previously done internally by
ExecInitResultTupleSlotTL() can now also be done separately with
ExecInitResultTypeTL() and ExecInitResultSlot(). That way nodes that
aren't guaranteed to need a result slot can use
ExecInitResultTypeTL() to determine the result type of the node, and,
when ExecAssignScanProjectionInfo() (via
ExecConditionalAssignProjectionInfo()) determines that a result slot
is needed, it is created with ExecInitResultSlot().
Besides the advantage of not creating slots that then go
unused, this is necessary preparation for later patches around tuple
table slot abstraction. In particular separating the return descriptor
and slot is a prerequisite to allow JITing of tuple deforming with
knowledge of the underlying tuple format, and to avoid unnecessarily
creating JITed tuple deforming for virtual slots.
This commit removes a redundant argument from
ExecInitResultTupleSlotTL(). While this commit touches a lot of the
relevant lines anyway, it'd normally still not be worthwhile to cause
breakage, except that aforementioned later commits will touch *all*
ExecInitResultTupleSlotTL() callers anyway (but fits worse
thematically).
Author: Andres Freund
Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
2018-11-10 02:19:39 +01:00
|
|
|
* Initialize result type and projection.
|
2018-02-17 06:17:38 +01:00
|
|
|
*/
|
2018-11-10 02:19:39 +01:00
|
|
|
ExecInitResultTypeTL(&indexstate->ss.ps);
|
2018-02-17 06:17:38 +01:00
|
|
|
ExecAssignScanProjectionInfo(&indexstate->ss);
|
|
|
|
|
1996-07-09 08:22:35 +02:00
|
|
|
/*
|
2002-12-05 16:50:39 +01:00
|
|
|
* initialize child expressions
|
2004-02-28 20:46:06 +01:00
|
|
|
*
|
2005-04-25 03:30:14 +02:00
|
|
|
* Note: we don't initialize all of the indexqual expression, only the
|
2010-12-03 02:50:48 +01:00
|
|
|
* sub-parts corresponding to runtime keys (see below). Likewise for
|
|
|
|
* indexorderby, if any. But the indexqualorig expression is always
|
|
|
|
* initialized even though it will only be used in some uncommon cases ---
|
|
|
|
* would be nice to improve that. (Problem is that any SubPlans present
|
|
|
|
* in the expression must be found now...)
|
1996-07-09 08:22:35 +02:00
|
|
|
*/
|
Faster expression evaluation and targetlist projection.
This replaces the old, recursive tree-walk based evaluation, with
non-recursive, opcode dispatch based, expression evaluation.
Projection is now implemented as part of expression evaluation.
This both leads to significant performance improvements, and makes
future just-in-time compilation of expressions easier.
The speed gains primarily come from:
- non-recursive implementation reduces stack usage / overhead
- simple sub-expressions are implemented with a single jump, without
function calls
- sharing some state between different sub-expressions
- reduced amount of indirect/hard to predict memory accesses by laying
out operation metadata sequentially; including the avoidance of
nearly all of the previously used linked lists
- more code has been moved to expression initialization, avoiding
constant re-checks at evaluation time
Future just-in-time compilation (JIT) has become easier, as
demonstrated by released patches intended to be merged in a later
release, for primarily two reasons: Firstly, due to a stricter split
between expression initialization and evaluation, less code has to be
handled by the JIT. Secondly, due to the non-recursive nature of the
generated "instructions", less performance-critical code-paths can
easily be shared between interpreted and compiled evaluation.
The new framework allows for significant future optimizations. E.g.:
- basic infrastructure to later reduce the per-executor-startup
overhead of expression evaluation, by caching state in prepared
statements. That'd be helpful in OLTPish scenarios where
initialization overhead is measurable.
- optimizing the generated "code". A number of proposals for potential
work have already been made.
- optimizing the interpreter. Similarly a number of proposals have
been made here too.
The move of logic into the expression initialization step leads to some
backward-incompatible changes:
- Function permission checks are now done during expression
initialization, whereas previously they were done during
execution. In edge cases this can lead to errors being raised that
previously wouldn't have been, e.g. a NULL array being coerced to a
different array type previously didn't perform checks.
- The set of domain constraints to be checked is now evaluated once
during expression initialization; previously it was re-built
every time a domain check was evaluated. For normal queries this
doesn't change much, but e.g. for plpgsql functions, which cache
ExprStates, the old set could stick around longer. The behavior
around this might still change.
Author: Andres Freund, with significant changes by Tom Lane,
changes by Heikki Linnakangas
Reviewed-By: Tom Lane, Heikki Linnakangas
Discussion: https://postgr.es/m/20161206034955.bh33paeralxbtluv@alap3.anarazel.de
2017-03-14 23:45:36 +01:00
|
|
|
indexstate->ss.ps.qual =
|
|
|
|
ExecInitQual(node->scan.plan.qual, (PlanState *) indexstate);
|
|
|
|
indexstate->indexqualorig =
|
|
|
|
ExecInitQual(node->indexqualorig, (PlanState *) indexstate);
|
|
|
|
indexstate->indexorderbyorig =
|
|
|
|
ExecInitExprList(node->indexorderbyorig, (PlanState *) indexstate);
|
1996-07-09 08:22:35 +02:00
|
|
|
|
2007-05-25 19:54:25 +02:00
|
|
|
/*
|
|
|
|
* If we are just doing EXPLAIN (ie, aren't going to run the plan), stop
|
|
|
|
* here. This allows an index-advisor plugin to EXPLAIN a plan containing
|
|
|
|
* references to nonexistent indexes.
|
|
|
|
*/
|
|
|
|
if (eflags & EXEC_FLAG_EXPLAIN_ONLY)
|
|
|
|
return indexstate;
|
|
|
|
|
2019-04-04 21:12:51 +02:00
|
|
|
/* Open the index relation. */
|
|
|
|
lockmode = exec_rt_fetch(node->scan.scanrelid, estate)->rellockmode;
|
|
|
|
indexstate->iss_RelationDesc = index_open(node->indexid, lockmode);
|
2006-01-25 21:29:24 +01:00
|
|
|
|
1996-07-09 08:22:35 +02:00
|
|
|
/*
|
2002-12-05 16:50:39 +01:00
|
|
|
* Initialize index-specific scan state
|
1996-07-09 08:22:35 +02:00
|
|
|
*/
|
2000-08-13 04:50:35 +02:00
|
|
|
indexstate->iss_RuntimeKeysReady = false;
|
2010-12-03 02:50:48 +01:00
|
|
|
indexstate->iss_RuntimeKeys = NULL;
|
|
|
|
indexstate->iss_NumRuntimeKeys = 0;
|
1997-09-07 07:04:48 +02:00
|
|
|
|
1996-07-09 08:22:35 +02:00
|
|
|
/*
|
|
|
|
* build the index scan keys from the index qualification
|
|
|
|
*/
|
2005-11-25 20:47:50 +01:00
|
|
|
ExecIndexBuildScanKeys((PlanState *) indexstate,
|
2006-01-25 21:29:24 +01:00
|
|
|
indexstate->iss_RelationDesc,
|
2005-11-25 20:47:50 +01:00
|
|
|
node->indexqual,
|
2010-12-03 02:50:48 +01:00
|
|
|
false,
|
2005-11-25 20:47:50 +01:00
|
|
|
&indexstate->iss_ScanKeys,
|
|
|
|
&indexstate->iss_NumScanKeys,
|
|
|
|
&indexstate->iss_RuntimeKeys,
|
|
|
|
&indexstate->iss_NumRuntimeKeys,
|
|
|
|
NULL, /* no ArrayKeys */
|
|
|
|
NULL);
|
1997-09-07 07:04:48 +02:00
|
|
|
|
2010-12-03 02:50:48 +01:00
|
|
|
/*
|
|
|
|
* any ORDER BY exprs have to be turned into scankeys in the same way
|
|
|
|
*/
|
|
|
|
ExecIndexBuildScanKeys((PlanState *) indexstate,
|
|
|
|
indexstate->iss_RelationDesc,
|
|
|
|
node->indexorderby,
|
|
|
|
true,
|
|
|
|
&indexstate->iss_OrderByKeys,
|
|
|
|
&indexstate->iss_NumOrderByKeys,
|
|
|
|
&indexstate->iss_RuntimeKeys,
|
|
|
|
&indexstate->iss_NumRuntimeKeys,
|
|
|
|
NULL, /* no ArrayKeys */
|
|
|
|
NULL);
|
|
|
|
|
2015-05-15 13:26:51 +02:00
|
|
|
/* Initialize sort support, if we need to re-check ORDER BY exprs */
|
|
|
|
if (indexstate->iss_NumOrderByKeys > 0)
|
|
|
|
{
|
|
|
|
int numOrderByKeys = indexstate->iss_NumOrderByKeys;
|
2015-05-18 03:22:12 +02:00
|
|
|
int i;
|
2015-05-22 01:47:48 +02:00
|
|
|
ListCell *lco;
|
|
|
|
ListCell *lcx;
|
2015-05-15 13:26:51 +02:00
|
|
|
|
|
|
|
/*
|
2015-05-22 01:47:48 +02:00
|
|
|
* Prepare sort support, and look up the data type for each ORDER BY
|
|
|
|
* expression.
|
2015-05-15 13:26:51 +02:00
|
|
|
*/
|
2015-05-18 03:22:12 +02:00
|
|
|
Assert(numOrderByKeys == list_length(node->indexorderbyops));
|
2015-05-22 01:47:48 +02:00
|
|
|
Assert(numOrderByKeys == list_length(node->indexorderbyorig));
|
|
|
|
indexstate->iss_SortSupport = (SortSupportData *)
|
2015-05-15 13:26:51 +02:00
|
|
|
palloc0(numOrderByKeys * sizeof(SortSupportData));
|
2015-05-22 01:47:48 +02:00
|
|
|
indexstate->iss_OrderByTypByVals = (bool *)
|
2015-05-15 13:26:51 +02:00
|
|
|
palloc(numOrderByKeys * sizeof(bool));
|
2015-05-22 01:47:48 +02:00
|
|
|
indexstate->iss_OrderByTypLens = (int16 *)
|
2015-05-15 13:26:51 +02:00
|
|
|
palloc(numOrderByKeys * sizeof(int16));
|
2015-05-18 03:22:12 +02:00
|
|
|
i = 0;
|
2015-05-22 01:47:48 +02:00
|
|
|
forboth(lco, node->indexorderbyops, lcx, node->indexorderbyorig)
|
2015-05-15 13:26:51 +02:00
|
|
|
{
|
2015-05-22 01:47:48 +02:00
|
|
|
Oid orderbyop = lfirst_oid(lco);
|
|
|
|
Node *orderbyexpr = (Node *) lfirst(lcx);
|
|
|
|
Oid orderbyType = exprType(orderbyexpr);
|
2016-06-05 17:53:06 +02:00
|
|
|
Oid orderbyColl = exprCollation(orderbyexpr);
|
|
|
|
SortSupport orderbysort = &indexstate->iss_SortSupport[i];
|
|
|
|
|
|
|
|
/* Initialize sort support */
|
|
|
|
orderbysort->ssup_cxt = CurrentMemoryContext;
|
|
|
|
orderbysort->ssup_collation = orderbyColl;
|
|
|
|
/* See cmp_orderbyvals() comments on NULLS LAST */
|
|
|
|
orderbysort->ssup_nulls_first = false;
|
|
|
|
/* ssup_attno is unused here and elsewhere */
|
|
|
|
orderbysort->ssup_attno = 0;
|
|
|
|
/* No abbreviation */
|
|
|
|
orderbysort->abbreviate = false;
|
|
|
|
PrepareSortSupportFromOrderingOp(orderbyop, orderbysort);
|
2015-05-15 13:26:51 +02:00
|
|
|
|
|
|
|
get_typlenbyval(orderbyType,
|
|
|
|
&indexstate->iss_OrderByTypLens[i],
|
|
|
|
&indexstate->iss_OrderByTypByVals[i]);
|
2015-05-18 03:22:12 +02:00
|
|
|
i++;
|
2015-05-15 13:26:51 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
/* allocate arrays to hold the re-calculated distances */
|
2015-05-22 01:47:48 +02:00
|
|
|
indexstate->iss_OrderByValues = (Datum *)
|
|
|
|
palloc(numOrderByKeys * sizeof(Datum));
|
|
|
|
indexstate->iss_OrderByNulls = (bool *)
|
|
|
|
palloc(numOrderByKeys * sizeof(bool));
|
2015-05-15 13:26:51 +02:00
|
|
|
|
2019-04-19 20:25:48 +02:00
|
|
|
/* and initialize the reorder queue */
|
2015-05-15 13:26:51 +02:00
|
|
|
indexstate->iss_ReorderQueue = pairingheap_allocate(reorderqueue_cmp,
|
|
|
|
indexstate);
|
|
|
|
}
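Once prepared this way, each SortSupport entry can compare two recomputed
ORDER BY values via ApplySortComparator(); cmp_orderbyvals() uses exactly
that to order entries in the reorder queue. An illustrative comparison call
(adatum/bdatum and their null flags are assumed inputs, not source code):

    /* Sketch: compare two ORDER BY distance values with prepared support. */
    int cmp = ApplySortComparator(adatum, anull,
                                  bdatum, bnull,
                                  &node->iss_SortSupport[i]);
    if (cmp < 0)
        ;                       /* adatum sorts before bdatum */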
|
|
|
|
|
1996-07-09 08:22:35 +02:00
|
|
|
/*
|
2005-04-25 03:30:14 +02:00
|
|
|
* If we have runtime keys, we need an ExprContext to evaluate them. The
|
|
|
|
* node's standard context won't do because we want to reset that context
|
2000-07-12 04:37:39 +02:00
|
|
|
* for every tuple. So, build another context just like the other one...
|
|
|
|
* -tgl 7/11/00
|
1996-07-09 08:22:35 +02:00
|
|
|
*/
|
2005-11-25 20:47:50 +01:00
|
|
|
if (indexstate->iss_NumRuntimeKeys != 0)
|
2000-07-12 04:37:39 +02:00
|
|
|
{
|
2002-12-05 16:50:39 +01:00
|
|
|
ExprContext *stdecontext = indexstate->ss.ps.ps_ExprContext;
|
2000-07-12 04:37:39 +02:00
|
|
|
|
2002-12-05 16:50:39 +01:00
|
|
|
ExecAssignExprContext(estate, &indexstate->ss.ps);
|
|
|
|
indexstate->iss_RuntimeContext = indexstate->ss.ps.ps_ExprContext;
|
|
|
|
indexstate->ss.ps.ps_ExprContext = stdecontext;
|
2000-07-12 04:37:39 +02:00
|
|
|
}
|
1996-07-09 08:22:35 +02:00
|
|
|
else
|
2000-07-12 04:37:39 +02:00
|
|
|
{
|
|
|
|
indexstate->iss_RuntimeContext = NULL;
|
|
|
|
}
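When runtime keys exist, this private context is reset and each key
expression is evaluated into its ScanKey at (re)scan time. A condensed
sketch of that loop, modeled on ExecIndexEvalRuntimeKeys later in this file
(pass-by-reference datum copying and other details omitted):

    /* Sketch: evaluate each runtime key expression into its ScanKey. */
    MemoryContextReset(econtext->ecxt_per_tuple_memory);
    for (int j = 0; j < numRuntimeKeys; j++)
    {
        ScanKey     scan_key = runtimeKeys[j].scan_key;
        ExprState  *key_expr = runtimeKeys[j].key_expr;
        bool        isNull;

        scan_key->sk_argument = ExecEvalExprSwitchContext(key_expr,
                                                          econtext,
                                                          &isNull);
        if (isNull)
            scan_key->sk_flags |= SK_ISNULL;
        else
            scan_key->sk_flags &= ~SK_ISNULL;
    }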
|
1997-09-07 07:04:48 +02:00
|
|
|
|
1996-07-09 08:22:35 +02:00
|
|
|
/*
|
|
|
|
* all done.
|
|
|
|
*/
|
2002-12-05 16:50:39 +01:00
|
|
|
return indexstate;
|
1996-07-09 08:22:35 +02:00
|
|
|
}
|
|
|
|
|
2005-04-25 03:30:14 +02:00
|
|
|
|
|
|
|
/*
|
|
|
|
* ExecIndexBuildScanKeys
|
2005-11-25 20:47:50 +01:00
|
|
|
* Build the index scan keys from the index qualification expressions
|
|
|
|
*
|
|
|
|
* The index quals are passed to the index AM in the form of a ScanKey array.
|
|
|
|
* This routine sets up the ScanKeys, fills in all constant fields of the
|
|
|
|
* ScanKeys, and prepares information about the keys that have non-constant
|
2007-04-07 00:33:43 +02:00
|
|
|
* comparison values. We divide index qual expressions into five types:
|
2005-11-25 20:47:50 +01:00
|
|
|
*
|
|
|
|
* 1. Simple operator with constant comparison value ("indexkey op constant").
|
|
|
|
* For these, we just fill in a ScanKey containing the constant value.
|
|
|
|
*
|
|
|
|
* 2. Simple operator with non-constant value ("indexkey op expression").
|
|
|
|
* For these, we create a ScanKey with everything filled in except the
|
|
|
|
* expression value, and set up an IndexRuntimeKeyInfo struct to drive
|
|
|
|
* evaluation of the expression at the right times.
|
|
|
|
*
|
2006-01-25 21:29:24 +01:00
|
|
|
* 3. RowCompareExpr ("(indexkey, indexkey, ...) op (expr, expr, ...)").
|
|
|
|
* For these, we create a header ScanKey plus a subsidiary ScanKey array,
|
|
|
|
* as specified in access/skey.h. The elements of the row comparison
|
|
|
|
* can have either constant or non-constant comparison values.
|
|
|
|
*
|
2011-10-16 21:39:24 +02:00
|
|
|
* 4. ScalarArrayOpExpr ("indexkey op ANY (array-expression)"). If the index
|
Restructure index access method API to hide most of it at the C level.
This patch reduces pg_am to just two columns, a name and a handler
function. All the data formerly obtained from pg_am is now provided
in a C struct returned by the handler function. This is similar to
the designs we've adopted for FDWs and tablesample methods. There
are multiple advantages. For one, the index AM's support functions
are now simple C functions, making them faster to call and much less
error-prone, since the C compiler can now check function signatures.
For another, this will make it far more practical to define index access
methods in installable extensions.
A disadvantage is that SQL-level code can no longer see attributes
of index AMs; in particular, some of the crosschecks in the opr_sanity
regression test are no longer possible from SQL. We've addressed that
by adding a facility for the index AM to perform such checks instead.
(Much more could be done in that line, but for now we're content if the
amvalidate functions more or less replace what opr_sanity used to do.)
We might also want to expose some sort of reporting functionality, but
this patch doesn't do that.
Alexander Korotkov, reviewed by Petr Jelínek, and rather heavily
editorialized on by me.
2016-01-18 01:36:59 +01:00
|
|
|
* supports amsearcharray, we handle these the same as simple operators,
|
2011-10-16 21:39:24 +02:00
|
|
|
* setting the SK_SEARCHARRAY flag to tell the AM to handle them. Otherwise,
|
2005-11-25 20:47:50 +01:00
|
|
|
* we create a ScanKey with everything filled in except the comparison value,
|
|
|
|
* and set up an IndexArrayKeyInfo struct to drive processing of the qual.
|
2011-10-16 21:39:24 +02:00
|
|
|
* (Note that if we use an IndexArrayKeyInfo struct, the array expression is
|
|
|
|
* always treated as requiring runtime evaluation, even if it's a constant.)
|
2005-04-25 03:30:14 +02:00
|
|
|
*
|
2010-01-01 22:53:49 +01:00
|
|
|
* 5. NullTest ("indexkey IS NULL/IS NOT NULL"). We just fill in the
|
|
|
|
* ScanKey properly.
|
2007-04-07 00:33:43 +02:00
|
|
|
*
|
2010-12-03 02:50:48 +01:00
|
|
|
* This code is also used to prepare ORDER BY expressions for amcanorderbyop
|
|
|
|
* indexes. The behavior is exactly the same, except that we have to look up
|
|
|
|
* the operator differently. Note that only cases 1 and 2 are currently
|
|
|
|
* possible for ORDER BY.
|
|
|
|
*
|
2005-04-25 03:30:14 +02:00
|
|
|
* Input params are:
|
|
|
|
*
|
|
|
|
* planstate: executor state node we are working for
|
2006-01-25 21:29:24 +01:00
|
|
|
* index: the index we are building scan keys for
|
2010-12-03 02:50:48 +01:00
|
|
|
* quals: indexquals (or indexorderbys) expressions
|
|
|
|
* isorderby: true if processing ORDER BY exprs, false if processing quals
|
|
|
|
* *runtimeKeys: ptr to pre-existing IndexRuntimeKeyInfos, or NULL if none
|
|
|
|
* *numRuntimeKeys: number of pre-existing runtime keys
|
2006-01-25 21:29:24 +01:00
|
|
|
*
|
2005-04-25 03:30:14 +02:00
|
|
|
* Output params are:
|
|
|
|
*
|
|
|
|
* *scanKeys: receives ptr to array of ScanKeys
|
2005-11-25 20:47:50 +01:00
|
|
|
* *numScanKeys: receives number of scankeys
|
|
|
|
* *runtimeKeys: receives ptr to array of IndexRuntimeKeyInfos, or NULL if none
|
|
|
|
* *numRuntimeKeys: receives number of runtime keys
|
|
|
|
* *arrayKeys: receives ptr to array of IndexArrayKeyInfos, or NULL if none
|
|
|
|
* *numArrayKeys: receives number of array keys
|
2005-04-25 03:30:14 +02:00
|
|
|
*
|
2005-11-25 20:47:50 +01:00
|
|
|
* Caller may pass NULL for arrayKeys and numArrayKeys to indicate that
|
2011-10-16 21:39:24 +02:00
|
|
|
* IndexArrayKeyInfos are not supported.
|
2005-04-25 03:30:14 +02:00
|
|
|
*/
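For case 1 above, the ScanKey can be filled in completely up front. A hedged
sketch of what that initialization amounts to for a qual like "col = 42",
using the local variables declared in the loop below (values illustrative;
the real code first extracts each field from the OpExpr):

    /* Sketch: fully-initialized ScanKey for "indexkey op constant". */
    ScanKeyEntryInitialize(this_scan_key,
                           0,               /* flags: nothing special */
                           varattno,        /* index column number */
                           op_strategy,     /* e.g. BTEqualStrategyNumber */
                           op_righttype,    /* comparison value's type */
                           ((OpExpr *) clause)->inputcollid,
                           opfuncid,        /* comparison function to call */
                           scanvalue);      /* the constant's Datum */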
|
2005-11-25 20:47:50 +01:00
|
|
|
void
|
2011-10-11 20:20:06 +02:00
|
|
|
ExecIndexBuildScanKeys(PlanState *planstate, Relation index,
|
2010-12-03 02:50:48 +01:00
|
|
|
List *quals, bool isorderby,
|
|
|
|
ScanKey *scanKeys, int *numScanKeys,
|
2005-11-25 20:47:50 +01:00
|
|
|
IndexRuntimeKeyInfo **runtimeKeys, int *numRuntimeKeys,
|
|
|
|
IndexArrayKeyInfo **arrayKeys, int *numArrayKeys)
|
2003-08-22 22:26:43 +02:00
|
|
|
{
|
2005-04-25 03:30:14 +02:00
|
|
|
ListCell *qual_cell;
|
|
|
|
ScanKey scan_keys;
|
2005-11-25 20:47:50 +01:00
|
|
|
IndexRuntimeKeyInfo *runtime_keys;
|
|
|
|
IndexArrayKeyInfo *array_keys;
|
|
|
|
int n_scan_keys;
|
|
|
|
int n_runtime_keys;
|
2010-12-03 02:50:48 +01:00
|
|
|
int max_runtime_keys;
|
2005-11-25 20:47:50 +01:00
|
|
|
int n_array_keys;
|
2005-04-25 03:30:14 +02:00
|
|
|
int j;
|
|
|
|
|
2010-12-03 02:50:48 +01:00
|
|
|
/* Allocate array for ScanKey structs: one per qual */
|
|
|
|
n_scan_keys = list_length(quals);
|
|
|
|
scan_keys = (ScanKey) palloc(n_scan_keys * sizeof(ScanKeyData));
|
|
|
|
|
2006-01-25 21:29:24 +01:00
|
|
|
/*
|
2010-12-03 02:50:48 +01:00
|
|
|
* runtime_keys array is dynamically resized as needed. We handle it this
|
|
|
|
* way so that the same runtime keys array can be shared between
|
|
|
|
* indexquals and indexorderbys, which will be processed in separate calls
|
|
|
|
* of this function. Caller must be sure to pass in NULL/0 for first
|
|
|
|
* call.
|
2006-01-25 21:29:24 +01:00
|
|
|
*/
|
2010-12-03 02:50:48 +01:00
|
|
|
runtime_keys = *runtimeKeys;
|
|
|
|
n_runtime_keys = max_runtime_keys = *numRuntimeKeys;
|
|
|
|
|
|
|
|
/* Allocate array_keys as large as it could possibly need to be */
|
2005-11-25 20:47:50 +01:00
|
|
|
array_keys = (IndexArrayKeyInfo *)
|
|
|
|
palloc0(n_scan_keys * sizeof(IndexArrayKeyInfo));
|
|
|
|
n_array_keys = 0;
|
2005-04-25 03:30:14 +02:00
|
|
|
|
|
|
|
/*
|
2008-04-13 22:51:21 +02:00
|
|
|
* For each opclause in the given qual, convert it into a single
|
2005-04-25 03:30:14 +02:00
|
|
|
* scan key
|
|
|
|
*/
|
2008-04-13 22:51:21 +02:00
|
|
|
j = 0;
|
|
|
|
foreach(qual_cell, quals)
|
2005-04-25 03:30:14 +02:00
|
|
|
{
|
2008-04-13 22:51:21 +02:00
|
|
|
Expr *clause = (Expr *) lfirst(qual_cell);
|
|
|
|
ScanKey this_scan_key = &scan_keys[j++];
|
|
|
|
Oid opno; /* operator's OID */
|
2005-11-25 20:47:50 +01:00
|
|
|
RegProcedure opfuncid; /* operator proc id used in scan */
|
2008-04-13 22:51:21 +02:00
|
|
|
Oid opfamily; /* opfamily of index column */
|
|
|
|
int op_strategy; /* operator's strategy number */
|
|
|
|
Oid op_lefttype; /* operator's declared input types */
|
|
|
|
Oid op_righttype;
|
2005-04-25 03:30:14 +02:00
|
|
|
Expr *leftop; /* expr on lhs of operator */
|
|
|
|
Expr *rightop; /* expr on rhs ... */
|
|
|
|
AttrNumber varattno; /* att number used in scan */
|
2018-04-07 22:00:39 +02:00
|
|
|
int indnkeyatts;
|
2005-04-25 03:30:14 +02:00
|
|
|
|
2018-04-07 22:00:39 +02:00
|
|
|
indnkeyatts = IndexRelationGetNumberOfKeyAttributes(index);
|
2005-11-25 20:47:50 +01:00
|
|
|
if (IsA(clause, OpExpr))
|
|
|
|
{
|
|
|
|
/* indexkey op const or indexkey op expression */
|
|
|
|
int flags = 0;
|
|
|
|
Datum scanvalue;
|
2005-04-25 03:30:14 +02:00
|
|
|
|
2008-04-13 22:51:21 +02:00
|
|
|
opno = ((OpExpr *) clause)->opno;
|
2005-11-25 20:47:50 +01:00
|
|
|
opfuncid = ((OpExpr *) clause)->opfuncid;
|
2005-04-25 03:30:14 +02:00
|
|
|
|
2005-11-25 20:47:50 +01:00
|
|
|
/*
|
|
|
|
* leftop should be the index key Var, possibly relabeled
|
|
|
|
*/
|
|
|
|
leftop = (Expr *) get_leftop(clause);
|
2005-04-25 03:30:14 +02:00
|
|
|
|
2005-11-25 20:47:50 +01:00
|
|
|
if (leftop && IsA(leftop, RelabelType))
|
|
|
|
leftop = ((RelabelType *) leftop)->arg;
|
2005-04-25 03:30:14 +02:00
|
|
|
|
2005-11-25 20:47:50 +01:00
|
|
|
Assert(leftop != NULL);
|
2005-04-25 03:30:14 +02:00
|
|
|
|
2005-11-25 20:47:50 +01:00
|
|
|
if (!(IsA(leftop, Var) &&
|
2011-10-11 20:20:06 +02:00
|
|
|
((Var *) leftop)->varno == INDEX_VAR))
|
2005-11-25 20:47:50 +01:00
|
|
|
elog(ERROR, "indexqual doesn't have key on left side");
|
2005-04-25 03:30:14 +02:00
|
|
|
|
2005-11-25 20:47:50 +01:00
|
|
|
varattno = ((Var *) leftop)->varattno;
|
2018-04-07 22:00:39 +02:00
|
|
|
if (varattno < 1 || varattno > indnkeyatts)
|
2008-04-13 22:51:21 +02:00
|
|
|
elog(ERROR, "bogus index qualification");
|
|
|
|
|
|
|
|
/*
|
|
|
|
* We have to look up the operator's strategy number. This
|
|
|
|
* provides a cross-check that the operator does match the index.
|
|
|
|
*/
|
|
|
|
opfamily = index->rd_opfamily[varattno - 1];
|
|
|
|
|
2010-12-03 02:50:48 +01:00
|
|
|
get_op_opfamily_properties(opno, opfamily, isorderby,
|
2008-04-13 22:51:21 +02:00
|
|
|
&op_strategy,
|
|
|
|
&op_lefttype,
|
|
|
|
&op_righttype);
|
2005-04-25 03:30:14 +02:00
|
|
|
|
2010-12-03 02:50:48 +01:00
|
|
|
if (isorderby)
|
|
|
|
flags |= SK_ORDER_BY;
|
|
|
|
|
2005-11-25 20:47:50 +01:00
|
|
|
/*
|
|
|
|
* rightop is the constant or variable comparison value
|
|
|
|
*/
|
|
|
|
rightop = (Expr *) get_rightop(clause);
|
2005-04-25 03:30:14 +02:00
|
|
|
|
2005-11-25 20:47:50 +01:00
|
|
|
if (rightop && IsA(rightop, RelabelType))
|
|
|
|
rightop = ((RelabelType *) rightop)->arg;
|
2005-04-25 03:30:14 +02:00
|
|
|
|
2005-11-25 20:47:50 +01:00
|
|
|
Assert(rightop != NULL);
|
2005-04-25 03:30:14 +02:00
|
|
|
|
2005-11-25 20:47:50 +01:00
|
|
|
if (IsA(rightop, Const))
|
|
|
|
{
|
|
|
|
/* OK, simple constant comparison value */
|
|
|
|
scanvalue = ((Const *) rightop)->constvalue;
|
|
|
|
if (((Const *) rightop)->constisnull)
|
|
|
|
flags |= SK_ISNULL;
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
/* Need to treat this one as a runtime key */
|
2010-12-03 02:50:48 +01:00
|
|
|
if (n_runtime_keys >= max_runtime_keys)
|
|
|
|
{
|
|
|
|
if (max_runtime_keys == 0)
|
|
|
|
{
|
|
|
|
max_runtime_keys = 8;
|
|
|
|
runtime_keys = (IndexRuntimeKeyInfo *)
|
|
|
|
palloc(max_runtime_keys * sizeof(IndexRuntimeKeyInfo));
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
max_runtime_keys *= 2;
|
|
|
|
runtime_keys = (IndexRuntimeKeyInfo *)
|
|
|
|
repalloc(runtime_keys, max_runtime_keys * sizeof(IndexRuntimeKeyInfo));
|
|
|
|
}
|
|
|
|
}
|
2005-11-25 20:47:50 +01:00
|
|
|
runtime_keys[n_runtime_keys].scan_key = this_scan_key;
|
|
|
|
runtime_keys[n_runtime_keys].key_expr =
|
|
|
|
ExecInitExpr(rightop, planstate);
|
2009-08-23 20:26:08 +02:00
|
|
|
runtime_keys[n_runtime_keys].key_toastable =
|
|
|
|
TypeIsToastable(op_righttype);
|
2005-11-25 20:47:50 +01:00
|
|
|
n_runtime_keys++;
|
|
|
|
scanvalue = (Datum) 0;
|
|
|
|
}
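/*
 * Editor's aside: the resizing above is the usual amortized-doubling
 * pattern.  A minimal standalone sketch of the same idea in plain C
 * (hypothetical helper, using malloc/realloc in place of palloc/repalloc;
 * error handling omitted for brevity):
 *
 *    #include <stdlib.h>
 *
 *    typedef struct RtKey { void *scan_key; void *key_expr; } RtKey;
 *
 *    static RtKey *
 *    rtkey_append(RtKey *keys, int *nkeys, int *maxkeys, RtKey newkey)
 *    {
 *        if (*nkeys >= *maxkeys)
 *        {
 *            // start at 8, then double; realloc(NULL, ...) acts as malloc
 *            *maxkeys = (*maxkeys == 0) ? 8 : *maxkeys * 2;
 *            keys = realloc(keys, *maxkeys * sizeof(RtKey));
 *        }
 *        keys[(*nkeys)++] = newkey;
 *        return keys;
 *    }
 *
 * Doubling keeps the total copying cost linear in the number of keys,
 * which is why the same array can be grown again cheaply when this
 * function is called a second time for the ORDER BY expressions.
 */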
|
2005-04-25 03:30:14 +02:00
|
|
|
|
|
|
|
/*
|
2005-11-25 20:47:50 +01:00
|
|
|
* initialize the scan key's fields appropriately
|
2005-04-25 03:30:14 +02:00
|
|
|
*/
|
2005-11-25 20:47:50 +01:00
|
|
|
ScanKeyEntryInitialize(this_scan_key,
|
|
|
|
flags,
|
|
|
|
varattno, /* attribute number to scan */
|
2008-04-13 22:51:21 +02:00
|
|
|
op_strategy, /* op's strategy */
|
|
|
|
op_righttype, /* strategy subtype */
|
2011-03-26 23:28:40 +01:00
|
|
|
((OpExpr *) clause)->inputcollid, /* collation */
|
2005-11-25 20:47:50 +01:00
|
|
|
opfuncid, /* reg proc to use */
|
|
|
|
scanvalue); /* constant */
|
2005-04-25 03:30:14 +02:00
|
|
|
}
|
2006-01-25 21:29:24 +01:00
|
|
|
else if (IsA(clause, RowCompareExpr))
|
|
|
|
{
|
|
|
|
/* (indexkey, indexkey, ...) op (expression, expression, ...) */
|
|
|
|
RowCompareExpr *rc = (RowCompareExpr *) clause;
|
2010-12-03 02:50:48 +01:00
|
|
|
ScanKey first_sub_key;
|
|
|
|
int n_sub_key;
|
2019-02-28 20:25:01 +01:00
|
|
|
ListCell *largs_cell;
|
|
|
|
ListCell *rargs_cell;
|
|
|
|
ListCell *opnos_cell;
|
|
|
|
ListCell *collids_cell;
|
2010-12-03 02:50:48 +01:00
|
|
|
|
|
|
|
Assert(!isorderby);
|
|
|
|
|
|
|
|
first_sub_key = (ScanKey)
|
|
|
|
palloc(list_length(rc->opnos) * sizeof(ScanKeyData));
|
|
|
|
n_sub_key = 0;
|
2006-01-25 21:29:24 +01:00
|
|
|
|
|
|
|
/* Scan RowCompare columns and generate subsidiary ScanKey items */
|
2019-02-28 20:25:01 +01:00
|
|
|
forfour(largs_cell, rc->largs, rargs_cell, rc->rargs,
|
|
|
|
opnos_cell, rc->opnos, collids_cell, rc->inputcollids)
|
2006-01-25 21:29:24 +01:00
|
|
|
{
|
2010-12-03 02:50:48 +01:00
|
|
|
ScanKey this_sub_key = &first_sub_key[n_sub_key];
|
2006-01-25 21:29:24 +01:00
|
|
|
int flags = SK_ROW_MEMBER;
|
|
|
|
Datum scanvalue;
|
2011-03-20 01:29:08 +01:00
|
|
|
Oid inputcollation;
|
2006-01-25 21:29:24 +01:00
|
|
|
|
2019-02-28 20:25:01 +01:00
|
|
|
leftop = (Expr *) lfirst(largs_cell);
|
|
|
|
rightop = (Expr *) lfirst(rargs_cell);
|
|
|
|
opno = lfirst_oid(opnos_cell);
|
|
|
|
inputcollation = lfirst_oid(collids_cell);
|
|
|
|
|
2006-01-25 21:29:24 +01:00
|
|
|
/*
|
|
|
|
* leftop should be the index key Var, possibly relabeled
|
|
|
|
*/
|
|
|
|
if (leftop && IsA(leftop, RelabelType))
|
|
|
|
leftop = ((RelabelType *) leftop)->arg;
|
|
|
|
|
|
|
|
Assert(leftop != NULL);
|
|
|
|
|
|
|
|
if (!(IsA(leftop, Var) &&
|
2011-10-11 20:20:06 +02:00
|
|
|
((Var *) leftop)->varno == INDEX_VAR))
|
2006-01-25 21:29:24 +01:00
|
|
|
elog(ERROR, "indexqual doesn't have key on left side");
|
|
|
|
|
|
|
|
varattno = ((Var *) leftop)->varattno;
|
|
|
|
|
2009-08-23 20:26:08 +02:00
|
|
|
/*
|
|
|
|
* We have to look up the operator's associated btree support
|
|
|
|
* function
|
|
|
|
*/
|
|
|
|
if (index->rd_rel->relam != BTREE_AM_OID ||
|
2018-04-07 22:00:39 +02:00
|
|
|
varattno < 1 || varattno > indnkeyatts)
|
2009-08-23 20:26:08 +02:00
|
|
|
elog(ERROR, "bogus RowCompare index qualification");
|
|
|
|
opfamily = index->rd_opfamily[varattno - 1];
|
|
|
|
|
2010-12-03 02:50:48 +01:00
|
|
|
get_op_opfamily_properties(opno, opfamily, isorderby,
|
2009-08-23 20:26:08 +02:00
|
|
|
&op_strategy,
|
|
|
|
&op_lefttype,
|
|
|
|
&op_righttype);
|
|
|
|
|
|
|
|
if (op_strategy != rc->rctype)
|
|
|
|
elog(ERROR, "RowCompare index qualification contains wrong operator");
|
|
|
|
|
|
|
|
opfuncid = get_opfamily_proc(opfamily,
|
|
|
|
op_lefttype,
|
|
|
|
op_righttype,
|
|
|
|
BTORDER_PROC);
|
2017-07-24 17:23:27 +02:00
|
|
|
if (!RegProcedureIsValid(opfuncid))
|
|
|
|
elog(ERROR, "missing support function %d(%u,%u) in opfamily %u",
|
|
|
|
BTORDER_PROC, op_lefttype, op_righttype, opfamily);
|
2009-08-23 20:26:08 +02:00
|
|
|
|
2006-01-25 21:29:24 +01:00
|
|
|
/*
|
|
|
|
* rightop is the constant or variable comparison value
|
|
|
|
*/
|
|
|
|
if (rightop && IsA(rightop, RelabelType))
|
|
|
|
rightop = ((RelabelType *) rightop)->arg;
|
|
|
|
|
|
|
|
Assert(rightop != NULL);
|
|
|
|
|
|
|
|
if (IsA(rightop, Const))
|
|
|
|
{
|
|
|
|
/* OK, simple constant comparison value */
|
|
|
|
scanvalue = ((Const *) rightop)->constvalue;
|
|
|
|
if (((Const *) rightop)->constisnull)
|
|
|
|
flags |= SK_ISNULL;
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
/* Need to treat this one as a runtime key */
|
2010-12-03 02:50:48 +01:00
|
|
|
if (n_runtime_keys >= max_runtime_keys)
|
|
|
|
{
|
|
|
|
if (max_runtime_keys == 0)
|
|
|
|
{
|
|
|
|
max_runtime_keys = 8;
|
|
|
|
runtime_keys = (IndexRuntimeKeyInfo *)
|
|
|
|
palloc(max_runtime_keys * sizeof(IndexRuntimeKeyInfo));
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
max_runtime_keys *= 2;
|
|
|
|
runtime_keys = (IndexRuntimeKeyInfo *)
|
|
|
|
repalloc(runtime_keys, max_runtime_keys * sizeof(IndexRuntimeKeyInfo));
|
|
|
|
}
|
|
|
|
}
|
2006-01-25 21:29:24 +01:00
|
|
|
runtime_keys[n_runtime_keys].scan_key = this_sub_key;
|
|
|
|
runtime_keys[n_runtime_keys].key_expr =
|
|
|
|
ExecInitExpr(rightop, planstate);
|
2009-08-23 20:26:08 +02:00
|
|
|
runtime_keys[n_runtime_keys].key_toastable =
|
|
|
|
TypeIsToastable(op_righttype);
|
2006-01-25 21:29:24 +01:00
|
|
|
n_runtime_keys++;
|
|
|
|
scanvalue = (Datum) 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* initialize the subsidiary scan key's fields appropriately
|
|
|
|
*/
|
|
|
|
ScanKeyEntryInitialize(this_sub_key,
|
|
|
|
flags,
|
|
|
|
varattno, /* attribute number */
|
|
|
|
op_strategy, /* op's strategy */
|
2006-12-23 01:43:13 +01:00
|
|
|
op_righttype, /* strategy subtype */
|
2011-03-26 23:28:40 +01:00
|
|
|
inputcollation, /* collation */
|
2006-01-25 21:29:24 +01:00
|
|
|
opfuncid, /* reg proc to use */
|
|
|
|
scanvalue); /* constant */
|
2010-12-03 02:50:48 +01:00
|
|
|
n_sub_key++;
|
2006-01-25 21:29:24 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Mark the last subsidiary scankey correctly */
|
2010-12-03 02:50:48 +01:00
|
|
|
first_sub_key[n_sub_key - 1].sk_flags |= SK_ROW_END;
|
2006-01-25 21:29:24 +01:00
|
|
|
|
|
|
|
/*
|
|
|
|
* We don't use ScanKeyEntryInitialize for the header because it
|
|
|
|
* isn't going to contain a valid sk_func pointer.
|
|
|
|
*/
|
|
|
|
MemSet(this_scan_key, 0, sizeof(ScanKeyData));
|
|
|
|
this_scan_key->sk_flags = SK_ROW_HEADER;
|
|
|
|
this_scan_key->sk_attno = first_sub_key->sk_attno;
|
|
|
|
this_scan_key->sk_strategy = rc->rctype;
|
2011-04-13 01:19:24 +02:00
|
|
|
/* sk_subtype, sk_collation, sk_func not used in a header */
|
2006-01-25 21:29:24 +01:00
|
|
|
this_scan_key->sk_argument = PointerGetDatum(first_sub_key);
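			/*
			 * Editor's sketch of the finished layout (inferred from the
			 * code above): for a qual like "(a, b) < (1, 2)" the keys hang
			 * together as
			 *
			 *    this_scan_key:    sk_flags = SK_ROW_HEADER
			 *                      sk_strategy = rc->rctype (here "<")
			 *                      sk_argument --> first_sub_key
			 *
			 *    first_sub_key[0]: sk_flags = SK_ROW_MEMBER               (a < 1)
			 *    first_sub_key[1]: sk_flags = SK_ROW_MEMBER | SK_ROW_END  (b < 2)
			 *
			 * so the index AM reaches the subsidiary keys through the
			 * header's argument and knows where the row comparison ends.
			 */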
|
|
|
|
}
|
2005-11-25 20:47:50 +01:00
|
|
|
else if (IsA(clause, ScalarArrayOpExpr))
|
2005-04-25 03:30:14 +02:00
|
|
|
{
|
2005-11-25 20:47:50 +01:00
|
|
|
/* indexkey op ANY (array-expression) */
|
|
|
|
ScalarArrayOpExpr *saop = (ScalarArrayOpExpr *) clause;
|
2011-10-16 21:39:24 +02:00
|
|
|
int flags = 0;
|
|
|
|
Datum scanvalue;
|
2005-11-25 20:47:50 +01:00
|
|
|
|
2010-12-03 02:50:48 +01:00
|
|
|
Assert(!isorderby);
|
|
|
|
|
2005-11-25 20:47:50 +01:00
|
|
|
Assert(saop->useOr);
|
2008-04-13 22:51:21 +02:00
|
|
|
opno = saop->opno;
|
2005-11-25 20:47:50 +01:00
|
|
|
opfuncid = saop->opfuncid;
|
|
|
|
|
2005-04-25 03:30:14 +02:00
|
|
|
/*
|
2005-11-25 20:47:50 +01:00
|
|
|
* leftop should be the index key Var, possibly relabeled
|
2005-04-25 03:30:14 +02:00
|
|
|
*/
|
2005-11-25 20:47:50 +01:00
|
|
|
leftop = (Expr *) linitial(saop->args);
|
2005-04-25 03:30:14 +02:00
|
|
|
|
2005-11-25 20:47:50 +01:00
|
|
|
if (leftop && IsA(leftop, RelabelType))
|
|
|
|
leftop = ((RelabelType *) leftop)->arg;
|
|
|
|
|
|
|
|
Assert(leftop != NULL);
|
|
|
|
|
|
|
|
if (!(IsA(leftop, Var) &&
|
2011-10-11 20:20:06 +02:00
|
|
|
((Var *) leftop)->varno == INDEX_VAR))
|
2005-11-25 20:47:50 +01:00
|
|
|
elog(ERROR, "indexqual doesn't have key on left side");
|
|
|
|
|
|
|
|
varattno = ((Var *) leftop)->varattno;
|
2018-04-07 22:00:39 +02:00
|
|
|
if (varattno < 1 || varattno > indnkeyatts)
|
2008-04-13 22:51:21 +02:00
|
|
|
elog(ERROR, "bogus index qualification");
|
|
|
|
|
|
|
|
/*
|
|
|
|
* We have to look up the operator's strategy number. This
|
|
|
|
* provides a cross-check that the operator does match the index.
|
|
|
|
*/
|
|
|
|
opfamily = index->rd_opfamily[varattno - 1];
|
|
|
|
|
2010-12-03 02:50:48 +01:00
|
|
|
get_op_opfamily_properties(opno, opfamily, isorderby,
|
2008-04-13 22:51:21 +02:00
|
|
|
&op_strategy,
|
|
|
|
&op_lefttype,
|
|
|
|
&op_righttype);
|
2005-11-25 20:47:50 +01:00
|
|
|
|
|
|
|
/*
|
|
|
|
* rightop is the constant or variable array value
|
|
|
|
*/
|
|
|
|
rightop = (Expr *) lsecond(saop->args);
|
|
|
|
|
|
|
|
if (rightop && IsA(rightop, RelabelType))
|
|
|
|
rightop = ((RelabelType *) rightop)->arg;
|
|
|
|
|
|
|
|
Assert(rightop != NULL);
|
|
|
|
|
2019-01-22 02:36:55 +01:00
|
|
|
if (index->rd_indam->amsearcharray)
|
2011-10-16 21:39:24 +02:00
|
|
|
{
|
|
|
|
/* Index AM will handle this like a simple operator */
|
|
|
|
flags |= SK_SEARCHARRAY;
|
|
|
|
if (IsA(rightop, Const))
|
|
|
|
{
|
|
|
|
/* OK, simple constant comparison value */
|
|
|
|
scanvalue = ((Const *) rightop)->constvalue;
|
|
|
|
if (((Const *) rightop)->constisnull)
|
|
|
|
flags |= SK_ISNULL;
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
/* Need to treat this one as a runtime key */
|
|
|
|
if (n_runtime_keys >= max_runtime_keys)
|
|
|
|
{
|
|
|
|
if (max_runtime_keys == 0)
|
|
|
|
{
|
|
|
|
max_runtime_keys = 8;
|
|
|
|
runtime_keys = (IndexRuntimeKeyInfo *)
|
|
|
|
palloc(max_runtime_keys * sizeof(IndexRuntimeKeyInfo));
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
max_runtime_keys *= 2;
|
|
|
|
runtime_keys = (IndexRuntimeKeyInfo *)
|
|
|
|
repalloc(runtime_keys, max_runtime_keys * sizeof(IndexRuntimeKeyInfo));
|
|
|
|
}
|
|
|
|
}
|
|
|
|
runtime_keys[n_runtime_keys].scan_key = this_scan_key;
|
|
|
|
runtime_keys[n_runtime_keys].key_expr =
|
|
|
|
ExecInitExpr(rightop, planstate);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Careful here: the runtime expression is not of
|
|
|
|
* op_righttype, but rather is an array of same; so
|
|
|
|
* TypeIsToastable() isn't helpful. However, we can
|
|
|
|
* assume that all array types are toastable.
|
|
|
|
*/
|
|
|
|
runtime_keys[n_runtime_keys].key_toastable = true;
|
|
|
|
n_runtime_keys++;
|
|
|
|
scanvalue = (Datum) 0;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
/* Executor has to expand the array value */
|
|
|
|
array_keys[n_array_keys].scan_key = this_scan_key;
|
|
|
|
array_keys[n_array_keys].array_expr =
|
|
|
|
ExecInitExpr(rightop, planstate);
|
|
|
|
/* the remaining fields were zeroed by palloc0 */
|
|
|
|
n_array_keys++;
|
|
|
|
scanvalue = (Datum) 0;
|
|
|
|
}
|
2005-11-25 20:47:50 +01:00
|
|
|
|
|
|
|
/*
|
|
|
|
* initialize the scan key's fields appropriately
|
|
|
|
*/
|
|
|
|
ScanKeyEntryInitialize(this_scan_key,
|
2011-10-16 21:39:24 +02:00
|
|
|
flags,
|
2005-11-25 20:47:50 +01:00
|
|
|
varattno, /* attribute number to scan */
|
2008-04-13 22:51:21 +02:00
|
|
|
op_strategy, /* op's strategy */
|
|
|
|
op_righttype, /* strategy subtype */
|
2011-03-26 23:28:40 +01:00
|
|
|
saop->inputcollid, /* collation */
|
2005-11-25 20:47:50 +01:00
|
|
|
opfuncid, /* reg proc to use */
|
2011-10-16 21:39:24 +02:00
|
|
|
scanvalue); /* constant */
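			/*
			 * Editor's example (behavior inferred from the branch above):
			 * given "WHERE indexkey = ANY ('{1,2,3}'::int4[])",
			 *
			 * - with an amsearcharray AM such as btree, the array constant
			 *   becomes the sk_argument of a single ScanKey marked
			 *   SK_SEARCHARRAY, and the AM iterates over the elements
			 *   internally;
			 *
			 * - without amsearcharray, the ScanKey is built with no
			 *   comparison value and an IndexArrayKeyInfo is queued
			 *   instead, so the executor expands the array itself and
			 *   re-runs the scan once per element.
			 */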
|
2005-11-25 20:47:50 +01:00
|
|
|
}
|
2007-04-07 00:33:43 +02:00
|
|
|
else if (IsA(clause, NullTest))
|
|
|
|
{
|
2010-01-01 22:53:49 +01:00
|
|
|
/* indexkey IS NULL or indexkey IS NOT NULL */
|
|
|
|
NullTest *ntest = (NullTest *) clause;
|
|
|
|
int flags;
|
2007-04-07 00:33:43 +02:00
|
|
|
|
2010-12-03 02:50:48 +01:00
|
|
|
Assert(!isorderby);
|
|
|
|
|
2007-04-07 00:33:43 +02:00
|
|
|
/*
|
|
|
|
* argument should be the index key Var, possibly relabeled
|
|
|
|
*/
|
2010-01-01 22:53:49 +01:00
|
|
|
leftop = ntest->arg;
|
2007-04-07 00:33:43 +02:00
|
|
|
|
|
|
|
if (leftop && IsA(leftop, RelabelType))
|
|
|
|
leftop = ((RelabelType *) leftop)->arg;
|
|
|
|
|
|
|
|
Assert(leftop != NULL);
|
|
|
|
|
|
|
|
if (!(IsA(leftop, Var) &&
|
2011-10-11 20:20:06 +02:00
|
|
|
((Var *) leftop)->varno == INDEX_VAR))
|
2007-04-07 00:33:43 +02:00
|
|
|
elog(ERROR, "NullTest indexqual has wrong key");
|
|
|
|
|
|
|
|
varattno = ((Var *) leftop)->varattno;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* initialize the scan key's fields appropriately
|
|
|
|
*/
|
2010-01-01 22:53:49 +01:00
|
|
|
switch (ntest->nulltesttype)
|
|
|
|
{
|
|
|
|
case IS_NULL:
|
|
|
|
flags = SK_ISNULL | SK_SEARCHNULL;
|
|
|
|
break;
|
|
|
|
case IS_NOT_NULL:
|
|
|
|
flags = SK_ISNULL | SK_SEARCHNOTNULL;
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
elog(ERROR, "unrecognized nulltesttype: %d",
|
|
|
|
(int) ntest->nulltesttype);
|
|
|
|
flags = 0; /* keep compiler quiet */
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
2007-04-07 00:33:43 +02:00
|
|
|
ScanKeyEntryInitialize(this_scan_key,
|
2010-01-01 22:53:49 +01:00
|
|
|
flags,
|
2007-04-07 00:33:43 +02:00
|
|
|
varattno, /* attribute number to scan */
|
2008-04-13 22:51:21 +02:00
|
|
|
InvalidStrategy, /* no strategy */
|
|
|
|
InvalidOid, /* no strategy subtype */
|
2011-03-26 23:28:40 +01:00
|
|
|
InvalidOid, /* no collation */
|
2007-04-07 00:33:43 +02:00
|
|
|
InvalidOid, /* no reg proc for this */
|
|
|
|
(Datum) 0); /* constant */
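			/*
			 * Editor's note: the resulting key carries no operator at all;
			 * only the flags and the attribute number matter:
			 *
			 *    indexkey IS NULL      ->  flags = SK_ISNULL | SK_SEARCHNULL
			 *    indexkey IS NOT NULL  ->  flags = SK_ISNULL | SK_SEARCHNOTNULL
			 *
			 * The AM acts on these flags directly rather than calling a
			 * comparison proc.
			 */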
|
|
|
|
}
|
2005-11-25 20:47:50 +01:00
|
|
|
else
|
|
|
|
elog(ERROR, "unsupported indexqual type: %d",
|
|
|
|
(int) nodeTag(clause));
|
2005-04-25 03:30:14 +02:00
|
|
|
}
|
|
|
|
|
2010-12-03 02:50:48 +01:00
|
|
|
Assert(n_runtime_keys <= max_runtime_keys);
|
|
|
|
|
2005-11-25 20:47:50 +01:00
|
|
|
/* Get rid of any unused arrays */
|
|
|
|
if (n_array_keys == 0)
|
2005-04-25 03:30:14 +02:00
|
|
|
{
|
2005-11-25 20:47:50 +01:00
|
|
|
pfree(array_keys);
|
|
|
|
array_keys = NULL;
|
2005-04-25 03:30:14 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
2005-11-25 20:47:50 +01:00
|
|
|
* Return info to our caller.
|
2005-04-25 03:30:14 +02:00
|
|
|
*/
|
|
|
|
*scanKeys = scan_keys;
|
2005-11-25 20:47:50 +01:00
|
|
|
*numScanKeys = n_scan_keys;
|
|
|
|
*runtimeKeys = runtime_keys;
|
|
|
|
*numRuntimeKeys = n_runtime_keys;
|
|
|
|
if (arrayKeys)
|
|
|
|
{
|
|
|
|
*arrayKeys = array_keys;
|
|
|
|
*numArrayKeys = n_array_keys;
|
|
|
|
}
|
|
|
|
else if (n_array_keys != 0)
|
|
|
|
elog(ERROR, "ScalarArrayOpExpr index qual found where not allowed");
|
2003-08-22 22:26:43 +02:00
|
|
|
}
|
2017-02-15 19:53:24 +01:00
|
|
|
|
|
|
|
/* ----------------------------------------------------------------
|
|
|
|
* Parallel Scan Support
|
|
|
|
* ----------------------------------------------------------------
|
|
|
|
*/
|
|
|
|
|
|
|
|
/* ----------------------------------------------------------------
|
|
|
|
* ExecIndexScanEstimate
|
|
|
|
*
|
2017-10-28 11:50:22 +02:00
|
|
|
* Compute the amount of space we'll need in the parallel
|
|
|
|
* query DSM, and inform pcxt->estimator about our needs.
|
2017-02-15 19:53:24 +01:00
|
|
|
* ----------------------------------------------------------------
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
ExecIndexScanEstimate(IndexScanState *node,
|
|
|
|
ParallelContext *pcxt)
|
|
|
|
{
|
|
|
|
EState *estate = node->ss.ps.state;
|
|
|
|
|
|
|
|
node->iss_PscanLen = index_parallelscan_estimate(node->iss_RelationDesc,
|
Enhance nbtree ScalarArrayOp execution.
Commit 9e8da0f7 taught nbtree to handle ScalarArrayOpExpr quals
natively. This works by pushing down the full context (the array keys)
to the nbtree index AM, enabling it to execute multiple primitive index
scans that the planner treats as one continuous index scan/index path.
This earlier enhancement enabled nbtree ScalarArrayOp index-only scans.
It also allowed scans with ScalarArrayOp quals to return ordered results
(with some notable restrictions, described further down).
Take this general approach a lot further: teach nbtree SAOP index scans
to decide how to execute ScalarArrayOp scans (when and where to start
the next primitive index scan) based on physical index characteristics.
This can be far more efficient. All SAOP scans will now reliably avoid
duplicative leaf page accesses (just like any other nbtree index scan).
SAOP scans whose array keys are naturally clustered together now require
far fewer index descents, since we'll reliably avoid starting a new
primitive scan just to get to a later offset from the same leaf page.
The scan's arrays now advance using binary searches for the array
element that best matches the next tuple's attribute value. Required
scan key arrays (i.e. arrays from scan keys that can terminate the scan)
ratchet forward in lockstep with the index scan. Non-required arrays
(i.e. arrays from scan keys that can only exclude non-matching tuples)
"advance" without the process ever rolling over to a higher-order array.
Naturally, only required SAOP scan keys trigger skipping over leaf pages
(non-required arrays cannot safely end or start primitive index scans).
Consequently, even index scans of a composite index with a high-order
inequality scan key (which we'll mark required) and a low-order SAOP
scan key (which we won't mark required) now avoid repeating leaf page
accesses -- that benefit isn't limited to simpler equality-only cases.
In general, all nbtree index scans now output tuples as if they were one
continuous index scan -- even scans that mix a high-order inequality
with lower-order SAOP equalities reliably output tuples in index order.
This allows us to remove a couple of special cases that were applied
when building index paths with SAOP clauses during planning.
Bugfix commit 807a40c5 taught the planner to avoid generating unsafe
path keys: path keys on a multicolumn index path, with a SAOP clause on
any attribute beyond the first/most significant attribute. These cases
are now all safe, so we go back to generating path keys without regard
for the presence of SAOP clauses (just like with any other clause type).
Affected queries can now exploit scan output order in all the usual ways
(e.g., certain "ORDER BY ... LIMIT n" queries can now terminate early).
Also undo changes from follow-up bugfix commit a4523c5a, which taught
the planner to produce alternative index paths, with path keys, but
without low-order SAOP index quals (filter quals were used instead).
We'll no longer generate these alternative paths, since they can no
longer offer any meaningful advantages over standard index qual paths.
Affected queries thereby avoid all of the disadvantages that come from
using filter quals within index scan nodes. They can avoid extra heap
page accesses from using filter quals to exclude non-matching tuples
(index quals will never have that problem). They can also skip over
irrelevant sections of the index in more cases (though only when nbtree
determines that starting another primitive scan actually makes sense).
There is a theoretical risk that removing restrictions on SAOP index
paths from the planner will break compatibility with amcanorder-based
index AMs maintained as extensions. Such an index AM could have the
same limitations around ordered SAOP scans as nbtree had up until now.
Adding a pro forma incompatibility item about the issue to the Postgres
17 release notes seems like a good idea.
Author: Peter Geoghegan <pg@bowt.ie>
Author: Matthias van de Meent <boekewurm+postgres@gmail.com>
Reviewed-By: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-By: Matthias van de Meent <boekewurm+postgres@gmail.com>
Reviewed-By: Tomas Vondra <tomas.vondra@enterprisedb.com>
Discussion: https://postgr.es/m/CAH2-Wz=ksvN_sjcnD1+Bt-WtifRA5ok48aDYnq3pkKhxgMQpcw@mail.gmail.com
2024-04-06 17:47:10 +02:00
|
|
|
node->iss_NumScanKeys,
|
|
|
|
node->iss_NumOrderByKeys,
|
2017-02-15 19:53:24 +01:00
|
|
|
estate->es_snapshot);
|
|
|
|
shm_toc_estimate_chunk(&pcxt->estimator, node->iss_PscanLen);
|
|
|
|
shm_toc_estimate_keys(&pcxt->estimator, 1);
|
|
|
|
}
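/*
 * Editor's sketch of the generic estimate-phase pattern this function
 * follows (FooScanState, foo_pscan_len, and compute_foo_shared_size are
 * hypothetical; the shm_toc_* calls are the real API):
 *
 *    void
 *    ExecFooScanEstimate(FooScanState *node, ParallelContext *pcxt)
 *    {
 *        // how many bytes of shared state this node needs (placeholder)
 *        node->foo_pscan_len = compute_foo_shared_size(node);
 *        shm_toc_estimate_chunk(&pcxt->estimator, node->foo_pscan_len);
 *        shm_toc_estimate_keys(&pcxt->estimator, 1);   // one TOC entry
 *    }
 *
 * Each parallel-aware node reserves its DSM space this way before the
 * segment is created; the later InitializeDSM call must allocate no more
 * than was estimated here.
 */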
|
|
|
|
|
|
|
|
/* ----------------------------------------------------------------
|
|
|
|
* ExecIndexScanInitializeDSM
|
|
|
|
*
|
|
|
|
* Set up a parallel index scan descriptor.
|
|
|
|
* ----------------------------------------------------------------
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
ExecIndexScanInitializeDSM(IndexScanState *node,
|
|
|
|
ParallelContext *pcxt)
|
|
|
|
{
|
|
|
|
EState *estate = node->ss.ps.state;
|
|
|
|
ParallelIndexScanDesc piscan;
|
|
|
|
|
|
|
|
piscan = shm_toc_allocate(pcxt->toc, node->iss_PscanLen);
|
|
|
|
index_parallelscan_initialize(node->ss.ss_currentRelation,
|
|
|
|
node->iss_RelationDesc,
|
|
|
|
estate->es_snapshot,
|
|
|
|
piscan);
|
|
|
|
shm_toc_insert(pcxt->toc, node->ss.ps.plan->plan_node_id, piscan);
|
|
|
|
node->iss_ScanDesc =
|
|
|
|
index_beginscan_parallel(node->ss.ss_currentRelation,
|
|
|
|
node->iss_RelationDesc,
|
|
|
|
node->iss_NumScanKeys,
|
|
|
|
node->iss_NumOrderByKeys,
|
|
|
|
piscan);
|
|
|
|
|
|
|
|
/*
|
2017-03-08 14:15:24 +01:00
|
|
|
* If there are no run-time keys to calculate, or they are ready, pass
|
|
|
|
* the scankeys to the index AM.
|
2017-02-15 19:53:24 +01:00
|
|
|
*/
|
2017-03-08 14:15:24 +01:00
|
|
|
if (node->iss_NumRuntimeKeys == 0 || node->iss_RuntimeKeysReady)
|
2017-02-15 19:53:24 +01:00
|
|
|
index_rescan(node->iss_ScanDesc,
|
|
|
|
node->iss_ScanKeys, node->iss_NumScanKeys,
|
|
|
|
node->iss_OrderByKeys, node->iss_NumOrderByKeys);
|
|
|
|
}
|
|
|
|
|
2017-08-30 19:18:16 +02:00
|
|
|
/* ----------------------------------------------------------------
|
|
|
|
* ExecIndexScanReInitializeDSM
|
|
|
|
*
|
|
|
|
* Reset shared state before beginning a fresh scan.
|
|
|
|
* ----------------------------------------------------------------
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
ExecIndexScanReInitializeDSM(IndexScanState *node,
|
|
|
|
ParallelContext *pcxt)
|
|
|
|
{
|
|
|
|
index_parallelrescan(node->iss_ScanDesc);
|
|
|
|
}
|
|
|
|
|
2017-02-15 19:53:24 +01:00
|
|
|
/* ----------------------------------------------------------------
|
|
|
|
* ExecIndexScanInitializeWorker
|
|
|
|
*
|
|
|
|
* Copy relevant information from TOC into planstate.
|
|
|
|
* ----------------------------------------------------------------
|
|
|
|
*/
|
|
|
|
void
|
2017-11-17 02:28:11 +01:00
|
|
|
ExecIndexScanInitializeWorker(IndexScanState *node,
|
|
|
|
ParallelWorkerContext *pwcxt)
|
2017-02-15 19:53:24 +01:00
|
|
|
{
|
|
|
|
ParallelIndexScanDesc piscan;
|
|
|
|
|
2017-11-17 02:28:11 +01:00
|
|
|
piscan = shm_toc_lookup(pwcxt->toc, node->ss.ps.plan->plan_node_id, false);
|
2017-02-15 19:53:24 +01:00
|
|
|
node->iss_ScanDesc =
|
|
|
|
index_beginscan_parallel(node->ss.ss_currentRelation,
|
|
|
|
node->iss_RelationDesc,
|
|
|
|
node->iss_NumScanKeys,
|
|
|
|
node->iss_NumOrderByKeys,
|
|
|
|
piscan);
|
|
|
|
|
|
|
|
/*
|
2017-03-08 14:15:24 +01:00
|
|
|
* If there are no run-time keys to calculate, or they are ready, pass
|
|
|
|
* the scankeys to the index AM.
|
2017-02-15 19:53:24 +01:00
|
|
|
*/
|
2017-03-08 14:15:24 +01:00
|
|
|
if (node->iss_NumRuntimeKeys == 0 || node->iss_RuntimeKeysReady)
|
2017-02-15 19:53:24 +01:00
|
|
|
index_rescan(node->iss_ScanDesc,
|
|
|
|
node->iss_ScanKeys, node->iss_NumScanKeys,
|
|
|
|
node->iss_OrderByKeys, node->iss_NumOrderByKeys);
|
|
|
|
}
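/*
 * Editor's sketch of the matching worker-side pattern (FooScanState and
 * foo_shared are hypothetical; shm_toc_lookup is the real API).  The
 * leader stored the shared state under the plan node's ID, so the worker
 * retrieves it with the same key:
 *
 *    void
 *    ExecFooScanInitializeWorker(FooScanState *node,
 *                                ParallelWorkerContext *pwcxt)
 *    {
 *        void *shared;
 *
 *        // false = elog(ERROR) if the TOC entry is missing
 *        shared = shm_toc_lookup(pwcxt->toc,
 *                                node->ss.ps.plan->plan_node_id,
 *                                false);
 *        node->foo_shared = shared;   // hook local state to shared state
 *    }
 */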
|