/*------------------------------------------------------------------------
 * PostgreSQL manual configuration settings
 *
 * This file contains various configuration symbols and limits.  In
 * all cases, changing them is only useful in very rare situations or
 * for developers.  If you edit any of these, be sure to do a *full*
 * rebuild (and an initdb if noted).
 *
 * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 * src/include/pg_config_manual.h
 *------------------------------------------------------------------------
 */

/*
 * This is the default value for wal_segment_size to be used when initdb is run
 * without the --wal-segsize option.  It must be a valid segment size.
 */
#define DEFAULT_XLOG_SEG_SIZE	(16*1024*1024)

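/*
 * Illustration only (not part of the upstream header): the segment size is
 * a per-cluster choice made at initdb time, e.g.
 *
 *     initdb --wal-segsize=64 -D /path/to/datadir
 *
 * creates a cluster whose WAL segment files are 64 MB instead of the 16 MB
 * default defined above (the data directory path here is hypothetical).
 */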
/*
 * Maximum length for identifiers (e.g. table names, column names,
 * function names).  Names actually are limited to one fewer byte than this,
 * because the length must include a trailing zero byte.
 *
 * Changing this requires an initdb.
 */
#define NAMEDATALEN 64

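/*
 * Hypothetical illustration (not part of the upstream header): with
 * NAMEDATALEN = 64, an identifier holds at most 63 bytes of data plus the
 * terminating zero byte, so longer names are silently truncated:
 *
 *     char ident[NAMEDATALEN];
 *     strlcpy(ident, very_long_name, NAMEDATALEN);  /- keeps first 63 bytes -/
 *
 * (comment delimiters in the snippet altered to nest inside this comment)
 */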
/*
 * Maximum number of arguments to a function.
 *
 * The minimum value is 8 (GIN indexes use 8-argument support functions).
 * The maximum possible value is around 600 (limited by index tuple size in
 * pg_proc's index; BLCKSZ larger than 8K would allow more).  Values larger
 * than needed will waste memory and processing time, but do not directly
 * cost disk space.
 *
 * Changing this does not require an initdb, but it does require a full
 * backend recompile (including any user-defined C functions).
 */
#define FUNC_MAX_ARGS 100

/*
 * When creating a product derived from PostgreSQL with changes that cause
 * incompatibilities for loadable modules, it is recommended to change this
 * string so that dfmgr.c can refuse to load incompatible modules with a clean
 * error message.  Typical examples that cause incompatibilities are any
 * changes to node tags or node structures.  (Note that dfmgr.c already
 * detects common sources of incompatibilities due to major version
 * differences and due to some changed compile-time constants.  This setting
 * is for catching anything that cannot be detected in a straightforward way.)
 *
 * There is no prescribed format for the string.  The suggestion is to include
 * product or company name, and optionally any internally-relevant ABI
 * version.  Example: "ACME Postgres/1.2".  Note that the string will appear
 * in a user-facing error message if an ABI mismatch is detected.
 */
#define FMGR_ABI_EXTRA "PostgreSQL"

/*
 * Maximum number of columns in an index.  There is little point in making
 * this anything but a multiple of 32, because the main cost is associated
 * with index tuple header size (see access/itup.h).
 *
 * Changing this requires an initdb.
 */
#define INDEX_MAX_KEYS 32

/*
 * Maximum number of columns in a partition key
 */
#define PARTITION_MAX_KEYS 32

/*
 * Decide whether built-in 8-byte types, including float8, int8, and
 * timestamp, are passed by value.  This is on by default if sizeof(Datum) >=
 * 8 (that is, on 64-bit platforms).  If sizeof(Datum) < 8 (32-bit platforms),
 * this must be off.  We keep this here as an option so that it is easy to
 * test the pass-by-reference code paths on 64-bit platforms.
 *
 * Changing this requires an initdb.
 */
#if SIZEOF_VOID_P >= 8
#define USE_FLOAT8_BYVAL 1
#endif

/*
 * When we don't have native spinlocks, we use semaphores to simulate them.
 * Decreasing this value reduces consumption of OS resources; increasing it
 * may improve performance, but supplying a real spinlock implementation is
 * probably far better.
 */
#define NUM_SPINLOCK_SEMAPHORES 128

/*
 * When we have neither spinlocks nor atomic operations support we're
 * implementing atomic operations on top of spinlock on top of semaphores.  To
 * be safe against atomic operations while holding a spinlock separate
 * semaphores have to be used.
 */
#define NUM_ATOMICS_SEMAPHORES 64

/*
 * MAXPGPATH: standard size of a pathname buffer in PostgreSQL (hence,
 * maximum usable pathname length is one less).
 *
 * We'd use a standard system header symbol for this, if there weren't
 * so many to choose from: MAXPATHLEN, MAX_PATH, PATH_MAX are all
 * defined by different "standards", and often have different values
 * on the same platform!  So we just punt and use a reasonably
 * generous setting here.
 */
#define MAXPGPATH 1024

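/*
 * Hypothetical illustration (not part of the upstream header): code that
 * builds pathnames sizes its buffers with MAXPGPATH and relies on
 * snprintf() truncation for overlong paths:
 *
 *     char path[MAXPGPATH];
 *     snprintf(path, MAXPGPATH, "%s/%s", datadir, filename);
 */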
/*
 * You can try changing this if you have a machine with bytes of
 * another size, but no guarantee...
 */
#define BITS_PER_BYTE 8

/*
 * Preferred alignment for disk I/O buffers.  On some CPUs, copies between
 * user space and kernel space are significantly faster if the user buffer
 * is aligned on a larger-than-MAXALIGN boundary.  Ideally this should be
 * a platform-dependent value, but for now we just hard-wire it.
 */
#define ALIGNOF_BUFFER 32

/*
 * If EXEC_BACKEND is defined, the postmaster uses an alternative method for
 * starting subprocesses: Instead of simply using fork(), as is standard on
 * Unix platforms, it uses fork()+exec() or something equivalent on Windows,
 * as well as lots of extra code to bring the required global state to those
 * new processes.  This must be enabled on Windows (because there is no
 * fork()).  On other platforms, it's only useful for verifying those
 * otherwise Windows-specific code paths.
 */
#if defined(WIN32) && !defined(__CYGWIN__)
#define EXEC_BACKEND
#endif

/*
 * USE_POSIX_FADVISE controls whether Postgres will attempt to use the
 * posix_fadvise() kernel call.  Usually the automatic configure tests are
 * sufficient, but some older Linux distributions had broken versions of
 * posix_fadvise().  If necessary you can remove the #define here.
 */
#if HAVE_DECL_POSIX_FADVISE && defined(HAVE_POSIX_FADVISE)
#define USE_POSIX_FADVISE
#endif

/*
 * USE_PREFETCH code should be compiled only if we have a way to implement
 * prefetching.  (This is decoupled from USE_POSIX_FADVISE because there
 * might in future be support for alternative low-level prefetch APIs.
 * If you change this, you probably need to adjust the error message in
 * check_effective_io_concurrency.)
 */
#ifdef USE_POSIX_FADVISE
#define USE_PREFETCH
#endif

/*
 * Default and maximum values for backend_flush_after, bgwriter_flush_after
 * and checkpoint_flush_after; measured in blocks.  Currently, these are
 * enabled by default if sync_file_range() exists, ie, only on Linux.  Perhaps
 * we could also enable by default if we have mmap and msync(MS_ASYNC)?
 */
#ifdef HAVE_SYNC_FILE_RANGE
#define DEFAULT_BACKEND_FLUSH_AFTER 0	/* never enabled by default */
#define DEFAULT_BGWRITER_FLUSH_AFTER 64
#define DEFAULT_CHECKPOINT_FLUSH_AFTER 32
#else
#define DEFAULT_BACKEND_FLUSH_AFTER 0
#define DEFAULT_BGWRITER_FLUSH_AFTER 0
#define DEFAULT_CHECKPOINT_FLUSH_AFTER 0
#endif

/* upper limit for all three variables */
#define WRITEBACK_MAX_PENDING_FLUSHES 256

/*
 * USE_SSL code should be compiled only when compiling with an SSL
 * implementation.
 */
#ifdef USE_OPENSSL
#define USE_SSL
#endif

/*
 * This is the default directory in which AF_UNIX socket files are
 * placed.  Caution: changing this risks breaking your existing client
 * applications, which are likely to continue to look in the old
 * directory.  But if you just hate the idea of sockets in /tmp,
 * here's where to twiddle it.  You can also override this at runtime
 * with the postmaster's -k switch.
 *
 * If set to an empty string, then AF_UNIX sockets are not used by default: A
 * server will not create an AF_UNIX socket unless the run-time configuration
 * is changed, a client will connect via TCP/IP by default and will only use
 * an AF_UNIX socket if one is explicitly specified.
 *
 * This is done by default on Windows because there is no good standard
 * location for AF_UNIX sockets and many installations on Windows don't
 * support them yet.
 */
#ifndef WIN32
#define DEFAULT_PGSOCKET_DIR "/tmp"
#else
#define DEFAULT_PGSOCKET_DIR ""
#endif

/*
 * This is the default event source for Windows event log.
 */
#define DEFAULT_EVENT_SOURCE "PostgreSQL"

/*
 * Assumed cache line size.  This doesn't affect correctness, but can be used
 * for low-level optimizations.  Currently, this is used to pad some data
 * structures in xlog.c, to ensure that highly-contended fields are on
 * different cache lines.  Too small a value can hurt performance due to false
 * sharing, while the only downside of too large a value is a few bytes of
 * wasted memory.  The default is 128, which should be large enough for all
 * supported platforms.
 */
#define PG_CACHE_LINE_SIZE 128

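/*
 * Hypothetical illustration (not part of the upstream header): padding a
 * highly-contended shared structure out to a cache-line boundary, in the
 * spirit of what xlog.c does for its insertion locks:
 *
 *     typedef union
 *     {
 *         SomeLockType lock;                 hypothetical lock type
 *         char         pad[PG_CACHE_LINE_SIZE];
 *     } PaddedLock;
 *
 * Each PaddedLock then occupies a full (assumed) cache line, so updates to
 * one element of an array of them do not cause false sharing with neighbors.
 */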
/*
 * Assumed alignment requirement for direct I/O.  4K corresponds to common
 * sector and memory page size.
 */
#define PG_IO_ALIGN_SIZE 4096

/*
 *------------------------------------------------------------------------
 * The following symbols are for enabling debugging code, not for
 * controlling user-visible features or resource limits.
 *------------------------------------------------------------------------
 */

/*
 * Include Valgrind "client requests", mostly in the memory allocator, so
 * Valgrind understands PostgreSQL memory contexts.  This permits detecting
 * memory errors that Valgrind would not detect on a vanilla build.  It also
 * enables detection of buffer accesses that take place without holding a
 * buffer pin (or without holding a buffer lock in the case of index access
 * methods that superimpose their own custom client requests on top of the
 * generic bufmgr.c requests).
 *
 * "make installcheck" is significantly slower under Valgrind.  The client
 * requests fall in hot code paths, so USE_VALGRIND slows execution by a few
 * percentage points even when not run under Valgrind.
 *
 * Do not try to test the server under Valgrind without having built the
 * server with USE_VALGRIND; else you will get false positives from sinval
 * messaging (see comments in AddCatcacheInvalidationMessage).  It's also
 * important to use the suppression file src/tools/valgrind.supp to
 * exclude other known false positives.
 *
 * You should normally use MEMORY_CONTEXT_CHECKING with USE_VALGRIND;
 * instrumentation of repalloc() is inferior without it.
 */
/* #define USE_VALGRIND */

/*
 * Define this to cause pfree()'d memory to be cleared immediately, to
 * facilitate catching bugs that refer to already-freed values.
 * Right now, this gets defined automatically if --enable-cassert.
 */
#ifdef USE_ASSERT_CHECKING
#define CLOBBER_FREED_MEMORY
#endif

/*
 * Define this to check memory allocation errors (scribbling on more
 * bytes than were allocated).  Right now, this gets defined
 * automatically if --enable-cassert or USE_VALGRIND.
 */
#if defined(USE_ASSERT_CHECKING) || defined(USE_VALGRIND)
#define MEMORY_CONTEXT_CHECKING
#endif

/*
 * Define this to cause palloc()'d memory to be filled with random data, to
 * facilitate catching code that depends on the contents of uninitialized
 * memory.  Caution: this is horrendously expensive.
 */
/* #define RANDOMIZE_ALLOCATED_MEMORY */

/*
 * For cache-invalidation debugging, define DISCARD_CACHES_ENABLED to enable
 * use of the debug_discard_caches GUC to aggressively flush syscache/relcache
 * entries whenever it's possible to deliver invalidations.  See
 * AcceptInvalidationMessages() in src/backend/utils/cache/inval.c for
 * details.
 *
 * USE_ASSERT_CHECKING builds default to enabling this.  It's possible to use
 * DISCARD_CACHES_ENABLED without a cassert build and the implied
 * CLOBBER_FREED_MEMORY and MEMORY_CONTEXT_CHECKING options, but it's unlikely
 * to be as effective at identifying problems.
 */
/* #define DISCARD_CACHES_ENABLED */

#if defined(USE_ASSERT_CHECKING) && !defined(DISCARD_CACHES_ENABLED)
#define DISCARD_CACHES_ENABLED
#endif

/*
 * Backwards compatibility for the older compile-time-only clobber-cache
 * macros.
 */
#if !defined(DISCARD_CACHES_ENABLED) && (defined(CLOBBER_CACHE_ALWAYS) || defined(CLOBBER_CACHE_RECURSIVELY))
#define DISCARD_CACHES_ENABLED
#endif

/*
 * Recover memory used for relcache entries when invalidated.  See
 * RelationBuildDesc() in src/backend/utils/cache/relcache.c.
 *
 * This is active automatically for clobber-cache builds when clobbering is
 * active, but can be overridden here by explicitly defining
 * RECOVER_RELATION_BUILD_MEMORY.  Define to 1 to always free relation cache
 * memory even when clobber is off, or to 0 to never free relation cache
 * memory even when clobbering is on.
 */
/* #define RECOVER_RELATION_BUILD_MEMORY 0 */	/* Force disable */
/* #define RECOVER_RELATION_BUILD_MEMORY 1 */	/* Force enable */

/*
 * Define this to force all parse and plan trees to be passed through
 * copyObject(), to facilitate catching errors and omissions in
 * copyObject().
 */
/* #define COPY_PARSE_PLAN_TREES */

/*
 * Define this to force all parse and plan trees to be passed through
 * outfuncs.c/readfuncs.c, to facilitate catching errors and omissions in
 * those modules.
 */
/* #define WRITE_READ_PARSE_PLAN_TREES */

/*
 * Define this to force all raw parse trees for DML statements to be scanned
 * by raw_expression_tree_walker(), to facilitate catching errors and
 * omissions in that function.
 */
/* #define RAW_EXPRESSION_COVERAGE_TEST */

/*
 * Enable debugging print statements for lock-related operations.
 */
/* #define LOCK_DEBUG */

/*
 * Enable debugging print statements for WAL-related operations; see
 * also the wal_debug GUC var.
 */
/* #define WAL_DEBUG */

/*
 * Enable tracing of resource consumption during sort operations;
 * see also the trace_sort GUC var.  For 8.1 this is enabled by default.
 */
#define TRACE_SORT 1

/*
 * Enable tracing of syncscan operations (see also the trace_syncscan GUC var).
 */
/* #define TRACE_SYNCSCAN */