/*-------------------------------------------------------------------------
 *
 * execdebug.h
 *	  #defines governing debugging behaviour in the executor
 *
 * XXX this is all pretty old and crufty.  Newer code tends to use elog()
 * for debug printouts, because that's more flexible than printf().
 *
 *
 * Portions Copyright (c) 1996-2022, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 * src/include/executor/execdebug.h
 *
 *-------------------------------------------------------------------------
 */
#ifndef EXECDEBUG_H
#define EXECDEBUG_H

#include "executor/executor.h"
#include "nodes/print.h"

/* ----------------------------------------------------------------
 *		debugging defines.
 *
 *		If you want certain debugging behaviour, then #define
 *		the variable to 1. No need to explicitly #undef by default,
 *		since we can use -D compiler options to enable features.
 *		- thomas 1999-02-20
 * ----------------------------------------------------------------
 */
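
/*
 * For example, building this file's includers with -DEXEC_SORTDEBUG (via
 * the compiler command line, e.g. CPPFLAGS) makes SO_printf() and the
 * related sort macros below expand to real printf() calls; without the
 * flag they expand to nothing, adding zero runtime cost.
 */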

/* ----------------
 *		EXEC_NESTLOOPDEBUG is a flag which turns on debugging of the
 *		nest loop node by NL_printf() and ENL_printf() in nodeNestloop.c
 * ----------------
#undef EXEC_NESTLOOPDEBUG
 */

/* ----------------
 *		EXEC_SORTDEBUG is a flag which turns on debugging of
 *		the ExecSort() stuff by SO_printf() in nodeSort.c
 * ----------------
#undef EXEC_SORTDEBUG
 */

/* ----------------
 *		EXEC_MERGEJOINDEBUG is a flag which turns on debugging of
 *		the ExecMergeJoin() stuff by MJ_printf() in nodeMergejoin.c
 * ----------------
#undef EXEC_MERGEJOINDEBUG
 */

/* ----------------------------------------------------------------
 *		#defines controlled by above definitions
 *
 *		Note: most of these are "incomplete" because I didn't
 *			  need the ones not defined.  More should be added
 *			  only as necessary -cim 10/26/89
 * ----------------------------------------------------------------
 */
#define T_OR_F(b)				((b) ? "true" : "false")
#define NULL_OR_TUPLE(slot)		(TupIsNull(slot) ? "null" : "a tuple")

/* ----------------
 *		nest loop debugging defines
 * ----------------
 */
#ifdef EXEC_NESTLOOPDEBUG
#define NL_nodeDisplay(l)				nodeDisplay(l)
#define NL_printf(s)					printf(s)
#define NL1_printf(s, a)				printf(s, a)
#define ENL1_printf(message)			printf("ExecNestLoop: %s\n", message)
#else
#define NL_nodeDisplay(l)
#define NL_printf(s)
#define NL1_printf(s, a)
#define ENL1_printf(message)
#endif							/* EXEC_NESTLOOPDEBUG */

/* ----------------
 *		sort node debugging defines
 * ----------------
 */
#ifdef EXEC_SORTDEBUG
#define SO_nodeDisplay(l)				nodeDisplay(l)
#define SO_printf(s)					printf(s)
#define SO1_printf(s, p)				printf(s, p)
#define SO2_printf(s, p1, p2)			printf(s, p1, p2)
#else
#define SO_nodeDisplay(l)
#define SO_printf(s)
#define SO1_printf(s, p)
#define SO2_printf(s, p1, p2)
#endif							/* EXEC_SORTDEBUG */

/* ----------------
 *		merge join debugging defines
 * ----------------
 */
#ifdef EXEC_MERGEJOINDEBUG

#define MJ_nodeDisplay(l)				nodeDisplay(l)
#define MJ_printf(s)					printf(s)
#define MJ1_printf(s, p)				printf(s, p)
#define MJ2_printf(s, p1, p2)			printf(s, p1, p2)
#define MJ_debugtup(slot)				debugtup(slot, NULL)
#define MJ_dump(state)					ExecMergeTupleDump(state)
#define MJ_DEBUG_COMPARE(res) \
		MJ1_printf("  MJCompare() returns %d\n", (res))
#define MJ_DEBUG_QUAL(clause, res) \
		MJ2_printf("  ExecQual(%s, econtext) returns %s\n", \
				   CppAsString(clause), T_OR_F(res))
#define MJ_DEBUG_PROC_NODE(slot) \
		MJ2_printf("  %s = ExecProcNode(...) returns %s\n", \
				   CppAsString(slot), NULL_OR_TUPLE(slot))
#else

#define MJ_nodeDisplay(l)
#define MJ_printf(s)
#define MJ1_printf(s, p)
#define MJ2_printf(s, p1, p2)
#define MJ_debugtup(slot)
#define MJ_dump(state)
#define MJ_DEBUG_COMPARE(res)
#define MJ_DEBUG_QUAL(clause, res)
#define MJ_DEBUG_PROC_NODE(slot)
#endif							/* EXEC_MERGEJOINDEBUG */
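
/*
 * For example, with EXEC_MERGEJOINDEBUG defined, MJ_DEBUG_COMPARE(r)
 * expands via MJ1_printf to
 *	  printf("  MJCompare() returns %d\n", (r));
 * whereas with the flag undefined it expands to nothing at all.
 */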

#endif							/* EXECDEBUG_H */