2014-11-30 03:55:00 +01:00
# src/test/modules/Makefile

subdir = src/test/modules
top_builddir = ../../..
include $(top_builddir)/src/Makefile.global

SUBDIRS = \
2015-08-13 19:41:52 +02:00
brin \
Keep track of transaction commit timestamps
Transactions can now set their commit timestamp directly as they commit,
or an external transaction commit timestamp can be fed from an outside
system using the new function TransactionTreeSetCommitTsData(). This
data is crash-safe, and is truncated at the Xid freeze point, the same as pg_clog.
This feature is disabled by default because it causes a performance hit, but it
can be enabled in postgresql.conf, requiring only a server restart.
A new test in src/test/modules is included.
Catalog version bumped due to the new subdirectory within PGDATA and a
couple of new SQL functions.
Authors: Álvaro Herrera and Petr Jelínek
Reviewed to varying degrees by Michael Paquier, Andres Freund, Robert
Haas, Amit Kapila, Fujii Masao, Jaime Casanova, Simon Riggs, Steven
Singer, Peter Eisentraut
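For illustration only (not code from this commit): a minimal C sketch of how
backend or extension code might read a recorded commit timestamp once the
feature is enabled. It assumes TransactionIdGetCommitTsData() in
access/commit_ts.h with the signature shown; treat the include and function
names as assumptions.

#include "postgres.h"
#include "access/commit_ts.h"
#include "replication/origin.h"
#include "utils/timestamp.h"

/* Report the commit timestamp recorded for a given transaction, if any. */
static void
report_commit_time(TransactionId xid)
{
	TimestampTz ts;
	RepOriginId nodeid;

	/* Returns false when no timestamp was recorded for this xid. */
	if (TransactionIdGetCommitTsData(xid, &ts, &nodeid))
		elog(NOTICE, "xid %u committed at %s", xid, timestamptz_to_str(ts));
	else
		elog(NOTICE, "no commit timestamp recorded for xid %u", xid);
}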
2014-12-03 15:53:02 +01:00
commit_ts \
2020-08-07 20:30:41 +02:00
delay_execution \
2019-09-25 05:11:12 +02:00
dummy_index_am \
2014-11-30 03:55:00 +01:00
dummy_seclabel \
2021-03-15 22:13:42 +01:00
libpq_pipeline \
2020-08-18 04:10:50 +02:00
plsample \
Fix confusion in SP-GiST between attribute type and leaf storage type.
According to the documentation, the attType passed to the opclass
config function (and also relied on by the core code) is the type
of the heap column or expression being indexed. But what was
actually being passed was the type stored for the index column.
This made no difference for user-defined SP-GiST opclasses,
because we weren't allowing the STORAGE clause of CREATE OPERATOR CLASS
to be used, so the two types would be the same. But it's silly
not to allow that, seeing that the built-in poly_ops opclass
has a different value for opckeytype than opcintype, and that if you
want to do lossy storage then the types must really be different.
(Thus, user-defined opclasses doing lossy storage had to lie about
what type is in the index.) Hence, remove the restriction, and make
sure that we use the input column type not opckeytype where relevant.
For reasons of backwards compatibility with existing user-defined
opclasses, we can't quite insist that the specified leafType match
the STORAGE clause; instead just add an amvalidate() warning if
they don't match.
Also fix some bugs that would only manifest when trying to return
index entries when attType is different from attLeafType. It's not
too surprising that these have not been reported, because the only
usual reason for such a difference is to store the leaf value
lossily, rendering index-only scans impossible.
Add a src/test/modules module to exercise cases where attType is
different from attLeafType and yet index-only scan is supported.
Discussion: https://postgr.es/m/3728741.1617381471@sss.pgh.pa.us
2021-04-04 20:28:35 +02:00
spgist_name_ops \
2018-04-01 02:49:41 +02:00
test_bloomfilter \
2022-10-11 04:45:52 +02:00
test_copy_callbacks \
2022-11-15 22:10:27 +01:00
test_custom_rmgrs \
Allow on-the-fly capture of DDL event details
This feature lets user code inspect and take action on DDL events.
Whenever a ddl_command_end event trigger is installed, DDL actions
executed are saved to a list which can be inspected during execution of
a function attached to ddl_command_end.
The set-returning function pg_event_trigger_ddl_commands can be used to
list actions so captured; it returns data about the type of command
executed, as well as the affected object. This is sufficient for many
uses of this feature. For the cases where it is not, we also provide a
"command" column of a new pseudo-type pg_ddl_command, which is a
pointer to a C structure that can be accessed by C code. The struct
contains all the info necessary to completely inspect and even
reconstruct the executed command.
There is no actual deparse code here; that's expected to come later.
What we have is enough infrastructure that the deparsing can be done in
an external extension. The intention is that we will add some deparsing
code in a later release, as an in-core extension.
A new test module is included. It's probably insufficient as is, but it
should be sufficient as a starting point for a more complete and
future-proof approach.
Authors: Álvaro Herrera, with some help from Andres Freund, Ian Barwick,
Abhijit Menon-Sen.
Reviews by Andres Freund, Robert Haas, Amit Kapila, Michael Paquier,
Craig Ringer, David Steele.
Additional input from Chris Browne, Dimitri Fontaine, Stephen Frost,
Petr Jelínek, Tom Lane, Jim Nasby, Steven Singer, Pavel Stěhule.
Based on original work by Dimitri Fontaine, though I didn't use his
code.
Discussion:
https://www.postgresql.org/message-id/m2txrsdzxa.fsf@2ndQuadrant.fr
https://www.postgresql.org/message-id/20131108153322.GU5809@eldon.alvh.no-ip.org
https://www.postgresql.org/message-id/20150215044814.GL3391@alvh.no-ip.org
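As an illustration only (not code from this commit): a minimal C event trigger
function that could be attached to ddl_command_end; a real consumer would
typically query pg_event_trigger_ddl_commands() from such a trigger, via SPI or
PL/pgSQL, to inspect the captured commands. The function name is invented for
the sketch.

#include "postgres.h"
#include "commands/event_trigger.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(log_ddl_end);

Datum
log_ddl_end(PG_FUNCTION_ARGS)
{
	EventTriggerData *trigdata;

	if (!CALLED_AS_EVENT_TRIGGER(fcinfo))	/* sanity check */
		elog(ERROR, "log_ddl_end: not called by event trigger manager");

	trigdata = (EventTriggerData *) fcinfo->context;
	elog(NOTICE, "ddl_command_end fired for event: %s", trigdata->event);

	PG_RETURN_NULL();
}

On the SQL side, such a function would be declared RETURNS event_trigger and
attached with CREATE EVENT TRIGGER ... ON ddl_command_end.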
2015-05-12 00:14:31 +02:00
test_ddl_deparse \
2023-11-15 10:34:31 +01:00
test_dsa \
Introduce the dynamic shared memory registry.
Presently, the most straightforward way for a shared library to use
shared memory is to request it at server startup via a
shmem_request_hook, which requires specifying the library in
shared_preload_libraries. Alternatively, the library can create a
dynamic shared memory (DSM) segment, but absent a shared location
to store the segment's handle, other backends cannot use it. This
commit introduces a registry for DSM segments so that these other
backends can look up existing segments with a library-specified
string. This allows libraries to easily use shared memory without
needing to request it at server startup.
The registry is accessed via the new GetNamedDSMSegment() function.
This function handles allocating the segment and initializing it
via a provided callback. If another backend already created and
initialized the segment, it simply attaches the segment.
GetNamedDSMSegment() locks the registry appropriately to ensure
that only one backend initializes the segment and that all other
backends just attach it.
The registry itself consists of a dshash table that stores the
DSM segment handles keyed by a library-specified string.
Reviewed-by: Michael Paquier, Andrei Lepikhov, Nikita Malakhov, Robert Haas, Bharath Rupireddy, Zhang Mingli, Amul Sul
Discussion: https://postgr.es/m/20231205034647.GA2705267%40nathanxps13
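A minimal sketch of the intended usage (the struct, the "my_extension" key,
and the tranche handling are invented for illustration; GetNamedDSMSegment()
is the new function this commit describes):

#include "postgres.h"
#include "storage/dsm_registry.h"
#include "storage/lwlock.h"

typedef struct MySharedState
{
	LWLock		lock;
	int64		counter;
} MySharedState;

/* Run only by whichever backend first creates the segment. */
static void
my_init_shared_state(void *ptr)
{
	MySharedState *state = (MySharedState *) ptr;

	LWLockInitialize(&state->lock, LWLockNewTrancheId());
	state->counter = 0;
}

static MySharedState *
my_attach_shared_state(void)
{
	bool		found;

	/* First caller allocates and initializes; later callers just attach. */
	return (MySharedState *) GetNamedDSMSegment("my_extension",
												sizeof(MySharedState),
												my_init_shared_state,
												&found);
}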
2024-01-19 21:24:36 +01:00
test_dsm_registry \
2015-10-03 18:19:37 +02:00
test_extensions \
Fix overflow check and comment in GIN posting list encoding.
The comment did not match what the code actually did for integers with
the 43rd bit set. You get an integer like that, if you have a posting
list with two adjacent TIDs that are more than 2^31 blocks apart.
According to the comment, we would store that in 6 bytes, with no
continuation bit on the 6th byte, but in reality, the code encodes it
using 7 bytes, with a continuation bit on the 6th byte as normal.
The decoding routine also handled these 7-byte integers correctly, except
for an overflow check that assumed that one integer needs at most 6 bytes.
Fix the overflow check, and fix the comment to match what the code
actually does. Also fix the comment that claimed that there are 17 unused
bits in the 64-bit representation of an item pointer. In reality, there
are 64-32-11=21.
Fitting any item pointer into max 6 bytes was an important property when
this was written, because in the old pre-9.4 format, item pointers were
stored as plain arrays, with 6 bytes for every item pointer. The maximum
of 6 bytes per integer in the new format guaranteed that we could convert
any page from the old format to the new format after upgrade, so that the
new format was never larger than the old format. But we hardly need to
worry about that anymore, and running into that problem during upgrade,
where an item pointer is expanded from 6 to 7 bytes such that the data
doesn't fit on a page anymore, is implausible in practice anyway.
Backpatch to all supported versions.
This also includes a little test module to test these large distances
between item pointers, without requiring a 16 TB table. It is not
backpatched; I'm including it more for the benefit of future development
of new posting list formats.
Discussion: https://www.postgresql.org/message-id/33bfc20a-5c86-f50c-f5a5-58e9925d05ff%40iki.fi
Reviewed-by: Masahiko Sawada, Alexander Korotkov
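To make the arithmetic concrete, here is a generic sketch (not the actual GIN
code) of 7-bits-per-byte varbyte encoding with a continuation bit: six bytes
carry only 42 payload bits, so a delta with the 43rd bit set, as produced by
two TIDs more than 2^31 blocks apart, needs a seventh byte, which is exactly
what the overflow check has to allow for.

#include <stdint.h>
#include <stdio.h>

/* Encode val into buf, 7 data bits per byte, high bit = continuation. */
static int
encode_varbyte(uint64_t val, unsigned char *buf)
{
	int			nbytes = 0;

	do
	{
		unsigned char b = val & 0x7F;

		val >>= 7;
		if (val != 0)
			b |= 0x80;			/* more bytes follow */
		buf[nbytes++] = b;
	} while (val != 0);

	return nbytes;
}

int
main(void)
{
	unsigned char buf[10];

	printf("%d\n", encode_varbyte((UINT64_C(1) << 42) - 1, buf));	/* 6 bytes */
	printf("%d\n", encode_varbyte(UINT64_C(1) << 42, buf));		/* 7 bytes */
	return 0;
}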
2019-08-28 11:55:33 +02:00
test_ginpostinglist \
Add IntegerSet, to hold large sets of 64-bit ints efficiently.
The set is implemented as a B-tree, with a compact representation at leaf
items, using the Simple-8b algorithm, so that clusters of nearby values use
less memory.
The IntegerSet isn't used for anything yet, aside from the test code, but
we have two patches in the works that would benefit from this: A patch to
allow GiST vacuum to delete empty pages, and a patch to reduce heap
VACUUM's memory usage, by storing the list of dead TIDs more efficiently
and lifting the 1 GB limit on its size.
This includes a unit test module, in src/test/modules/test_integerset.
It can be used to verify correctness, as a regression test, but if you run
it manually, it can also print memory usage and execution time of some of
the tests.
Author: Heikki Linnakangas, Andrey Borodin
Reviewed-by: Julien Rouhaud
Discussion: https://www.postgresql.org/message-id/b5e82599-1966-5783-733c-1a947ddb729f@iki.fi
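A sketch of how the test module exercises the API, with function names as
exposed by lib/integerset.h; treat the exact signatures as assumptions.

#include "postgres.h"
#include "lib/integerset.h"

static void
integerset_demo(void)
{
	IntegerSet *set = intset_create();
	uint64		x;

	/* Members must be added in increasing order. */
	intset_add_member(set, 1);
	intset_add_member(set, 2);
	intset_add_member(set, UINT64_C(1) << 40);

	if (intset_is_member(set, 2))
		elog(NOTICE, "2 is a member");

	/* Sorted iteration over the whole set. */
	intset_begin_iterate(set);
	while (intset_iterate_next(set, &x))
		elog(NOTICE, "member: " UINT64_FORMAT, x);
}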
2019-03-22 12:21:19 +01:00
test_integerset \
2024-03-11 04:10:14 +01:00
test_json_parser \
2022-08-03 18:49:04 +02:00
test_lfind \
2019-09-08 01:03:11 +02:00
test_misc \
2022-03-22 15:27:00 +01:00
test_oat_hooks \
Allow on-the-fly capture of DDL event details
2015-05-12 00:14:31 +02:00
test_parser \
Add TAP tests for pg_dump
This TAP test suite will create a new cluster, populate it based on
the 'create_sql' values in the '%tests' hash, run all of the runs
defined in the '%pgdump_runs' hash, and then for each test in the
'%tests' hash, compare each run's output to the regular expression
defined for the test under the 'like' and 'unlike' functions, as
appropriate.
While this test suite covers a fair bit of ground (67% of pg_dump.c
and quite a bit of the other files in src/bin/pg_dump), there is
still quite a bit which remains to be added to provide better code
coverage. Still, this is quite a bit better than we had, and has
found a few bugs already (note that the CREATE TRANSFORM test is
commented out, as it is currently failing).
Idea for using the TAP system from Tom, though all of the code is mine.
2016-05-06 20:06:50 +02:00
test_pg_dump \
2018-03-08 19:25:26 +01:00
test_predtest \
Add template for adaptive radix tree
This implements a radix tree data structure based on the design in
"The Adaptive Radix Tree: ARTful Indexing for Main-Memory Databases"
by Viktor Leis, Alfons Kemper, and Thomas Neumann, 2013. The main
technique that makes it adaptive is using several different node types,
each with a different capacity of elements, and a different algorithm
for accessing them. The nodes start small and grow/shrink as needed.
The main advantage over hash tables is efficient sorted iteration and
better memory locality when successive keys are lexicographically
close together. The implementation currently assumes 64-bit integer
keys, and traversing the tree is in general slower than a linear
probing hash table, so this is not a general-purpose associative array.
The paper describes two other techniques not implemented here,
namely "path compression" and "lazy expansion". These can further
reduce memory usage and speed up traversal, but the former would add
significant complexity and the latter requires storing the full key
with the value. We do trivially compress the path when leading bytes
of the key are zeros, however.
For value storage, we use "combined pointer/value slots", as
recommended in the paper. Values of size equal to or smaller than the
platform's pointer type are stored in the array of child pointers in
the last level node, while larger values are each stored in a separate
allocation. This is for now fixed at compile time, but it would be
fairly trivial to allow determining at runtime how variable-length
values are stored.
One innovation in our implementation compared to the ART paper is
decoupling the notion of node "size class" from "kind". The size
classes within a given node kind have the same underlying type, but
a variable capacity for children, so we can introduce additional node
sizes with little additional code.
To enable different use cases to specialize for different value types
and for shared/local memory, we use macro-templatized code generation
in the same manner as simplehash.h and sort_template.h.
Future commits will use this infrastructure for storing TIDs.
Patch by Masahiko Sawada and John Naylor, but a substantial amount of
credit is due to Andres Freund, whose proof-of-concept was a valuable
source of coding idioms and awareness of performance pitfalls, and
who reviewed earlier versions.
Discussion: https://postgr.es/m/CAD21AoAfOZvmfR0j8VmZorZjL7RhTiQdVttNuC4W-Shdc2a-AA%40mail.gmail.com
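By way of illustration, an instantiation in the style of simplehash.h; the
specific macro names (RT_PREFIX, RT_SCOPE, RT_DECLARE, RT_DEFINE,
RT_VALUE_TYPE) and the generated function names are assumptions rather than
text from the commit.

#include "postgres.h"

/* Generate a local-memory radix tree mapping uint64 keys to uint64 values. */
#define RT_PREFIX		myrt
#define RT_SCOPE		static
#define RT_DECLARE
#define RT_DEFINE
#define RT_VALUE_TYPE	uint64
#include "lib/radixtree.h"

/*
 * The include above would emit functions named after RT_PREFIX, such as
 * myrt_create(), myrt_set(), and myrt_find(), much as simplehash.h does.
 */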
2024-03-07 06:40:11 +01:00
test_radixtree \
2017-09-10 19:26:46 +02:00
test_rbtree \
2021-01-06 16:51:14 +01:00
test_regex \
Make ResourceOwners more easily extensible.
Instead of having a separate array/hash for each resource kind, use a
single array and hash to hold all kinds of resources. This makes it
possible to introduce new resource "kinds" without having to modify
the ResourceOwnerData struct. In particular, this makes it possible
for extensions to register custom resource kinds.
The old approach was to have a small array of resources of each kind,
and if it fills up, switch to a hash table. The new approach also uses
an array and a hash, but now the array and the hash are used at the
same time. The array is used to hold the recently added resources, and
when it fills up, they are moved to the hash. This keeps the access to
recent entries fast, even when there are a lot of long-held resources.
All the resource-specific ResourceOwnerEnlarge*(),
ResourceOwnerRemember*(), and ResourceOwnerForget*() functions have
been replaced with three generic functions that take resource kind as
argument. For convenience, we still define resource-specific wrapper
macros around the generic functions with the old names, but they are
now defined in the source files that use those resource kinds.
The release callback no longer needs to call ResourceOwnerForget on
the resource being released. ResourceOwnerRelease unregisters the
resource from the owner before calling the callback. That needed some
changes in bufmgr.c and some other files, where releasing the
resources previously always called ResourceOwnerForget.
Each resource kind specifies a release priority, and
ResourceOwnerReleaseAll releases the resources in priority order. To
make that possible, we have to restrict what you can do between
phases. After calling ResourceOwnerRelease(), you are no longer
allowed to remember any more resources in it or to forget any
previously remembered resources by calling ResourceOwnerForget. There
was one case where that was done previously. At subtransaction commit,
AtEOSubXact_Inval() would handle the invalidation messages and call
RelationFlushRelation(), which temporarily increased the reference
count on the relation being flushed. We now switch to the parent
subtransaction's resource owner before calling AtEOSubXact_Inval(), so
that there is a valid ResourceOwner to temporarily hold that relcache
reference.
Other end-of-xact routines make similar calls to AtEOXact_Inval()
between release phases, but I didn't see any regression test failures
from those, so I'm not sure if they could reach a codepath that needs
remembering extra resources.
There were two exceptions to how the resource leak WARNINGs on commit
were printed previously: llvmjit silently released the context without
printing the warning, and a leaked buffer io triggered a PANIC. Now
everything prints a WARNING, including those cases.
Add tests in src/test/modules/test_resowner.
Reviewed-by: Aleksander Alekseev, Michael Paquier, Julien Rouhaud
Reviewed-by: Kyotaro Horiguchi, Hayato Kuroda, Álvaro Herrera, Zhihong Yu
Reviewed-by: Peter Eisentraut, Andres Freund
Discussion: https://www.postgresql.org/message-id/cbfabeb0-cd3c-e951-a572-19b365ed314d%40iki.fi
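A sketch of how an extension might define its own resource kind with the
generic API; the ResourceOwnerDesc field names follow the commit's
description, and the file-descriptor kind and callback are invented for
illustration.

#include "postgres.h"
#include "utils/resowner.h"

#include <unistd.h>

static void my_fd_release(Datum res);

static const ResourceOwnerDesc my_fd_resowner_desc =
{
	.name = "my file descriptor",
	.release_phase = RESOURCE_RELEASE_AFTER_LOCKS,
	.release_priority = RELEASE_PRIO_FIRST,
	.ReleaseResource = my_fd_release,
	.DebugPrint = NULL			/* default leak warning is enough */
};

/* Callback run by ResourceOwnerReleaseAll for any fd still remembered. */
static void
my_fd_release(Datum res)
{
	close(DatumGetInt32(res));
}

static void
remember_my_fd(ResourceOwner owner, int fd)
{
	/* Reserve space first, so Remember cannot fail after we have the fd. */
	ResourceOwnerEnlarge(owner);
	ResourceOwnerRemember(owner, Int32GetDatum(fd), &my_fd_resowner_desc);
}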
2023-11-08 12:30:50 +01:00
test_resowner \
RLS fixes, new hooks, and new test module
In prepend_row_security_policies(), defaultDeny was always true, so if
there were any hook policies, the RLS policies on the table would just
get discarded. Fixed to start off with defaultDeny as false and then
properly set later if we detect that only the default deny policy exists
for the internal policies.
The infinite recursion detection in fireRIRrules() didn't properly
manage the activeRIRs list in the case of WCOs, so it would incorrectly
report infinite recursion if the same relation with RLS appeared more
than once in the rtable, for example "UPDATE t ... FROM t ...".
Further, the RLS expansion code in fireRIRrules() was handling RLS in
the main loop through the rtable, which led to RTEs being visited twice
if they contained sublink subqueries, which
prepend_row_security_policies() attempted to handle by exiting early if
the RTE already had securityQuals. That doesn't work, however, since
if the query involved a security barrier view on top of a table with
RLS, the RTE would already have securityQuals (from the view) by the
time fireRIRrules() was invoked, and so the table's RLS policies would
be ignored. This is fixed in fireRIRrules() by handling RLS in a
separate loop at the end, after dealing with any other sublink
subqueries, thus ensuring that each RTE is only visited once for RLS
expansion.
The inheritance planner code didn't correctly handle non-target
relations with RLS, which would get turned into subqueries during
planning. Thus an update of the form "UPDATE t1 ... FROM t2 ..." where
t1 has inheritance and t2 has RLS quals would fail. Fix by making sure
to copy in and update the securityQuals when they exist for non-target
relations.
process_policies() was adding WCOs to non-target relations, which is
unnecessary, and could lead to a lot of wasted time in the rewriter and
the planner. Fix by only adding WCO policies when working on the result
relation. Also in process_policies, we should be copying the USING
policies to the WITH CHECK policies on a per-policy basis, fix by moving
the copying up into the per-policy loop.
Lastly, as noted by Dean, we were simply adding policies returned by the
hook provided to the list of quals being AND'd, meaning that they would
actually restrict records returned and there was no option to have
internal policies and hook-based policies work together permissively (as
all internal policies currently work). Instead, explicitly add support
for both permissive and restrictive policies by having a hook for each
and combining the results appropriately. To ensure this is all done
correctly, add a new test module (test_rls_hooks) to test the various
combinations of internal, permissive, and restrictive hook policies.
Largely from Dean Rasheed (thanks!):
CAEZATCVmFUfUOwwhnBTcgi6AquyjQ0-1fyKd0T3xBWJvn+xsFA@mail.gmail.com
Author: Dean Rasheed, though I added the new hooks and test module.
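For illustration, patterned on what test_rls_hooks does but with invented
names: an extension assigns the two new hooks from _PG_init(); each hook
returns a list of policies to be combined with the table's own policies,
permissively or restrictively as appropriate. Treat the exact hook signature
as an assumption.

#include "postgres.h"
#include "fmgr.h"
#include "nodes/pg_list.h"
#include "rewrite/rowsecurity.h"
#include "utils/relcache.h"

PG_MODULE_MAGIC;

static List *
my_permissive_policies(CmdType cmdtype, Relation relation)
{
	/* A real hook would build and return RowSecurityPolicy nodes here. */
	return NIL;
}

static List *
my_restrictive_policies(CmdType cmdtype, Relation relation)
{
	return NIL;
}

void
_PG_init(void)
{
	row_security_policy_hook_permissive = my_permissive_policies;
	row_security_policy_hook_restrictive = my_restrictive_policies;
}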
2015-04-22 18:01:06 +02:00
test_rls_hooks \
2014-11-30 03:55:00 +01:00
test_shm_mq \
Add test module for SLRUs
This commit introduces a basic facility to test SLRUs, in terms of
initialization, page reads, writes, flushes, truncation and deletions,
using SQL wrappers around the APIs of slru.c. This should be easily
extensible at will, and it can be used as a starting point for someone
willing to implement an external module that makes use of SLRUs (LWLock
tranche registering and SLRU initialization particularly).
As this requires a loaded library, the tests use a custom configuration
file and are disabled under installcheck.
Author: Aleksander Alekseev, Michael Paquier
Reviewed-by: Pavel Borisov, Daniel Gustafsson, Noah Misch, Maxim Orlov
Discussion: https://postgr.es/m/CAJ7c6TOFoWcHOW4BVe3BG_uikCrO9B91ayx9d6rh5JZr_tPESg@mail.gmail.com
2022-11-16 01:52:21 +01:00
test_slru \
2024-03-21 02:08:42 +01:00
test_tidstore \
2019-06-30 18:51:08 +02:00
unsafe_tests \
2023-11-30 06:29:48 +01:00
worker_spi \
xid_wraparound
2024-01-22 05:32:28 +01:00
ifeq ($(enable_injection_points),yes)
2024-01-29 12:46:22 +01:00
SUBDIRS += injection_points gin
2024-01-22 05:32:28 +01:00
else
2024-01-29 12:46:22 +01:00
ALWAYS_SUBDIRS += injection_points gin
2024-01-22 05:32:28 +01:00
endif
2021-02-01 11:19:44 +01:00
ifeq ($(with_ssl),openssl)
2020-03-25 22:13:17 +01:00
SUBDIRS += ssl_passphrase_callback
2020-03-26 00:37:30 +01:00
else
ALWAYS_SUBDIRS += ssl_passphrase_callback
2020-03-25 22:13:17 +01:00
endif
2023-03-15 21:37:28 +01:00
# Test runs an LDAP server, so only run if ldap is in PG_TEST_EXTRA
ifeq ($(with_ldap),yes)
ifneq (,$(filter ldap,$(PG_TEST_EXTRA)))
SUBDIRS += ldap_password_func
else
ALWAYS_SUBDIRS += ldap_password_func
endif
else
ALWAYS_SUBDIRS += ldap_password_func
endif
2014-11-30 03:55:00 +01:00
$(recurse)
2020-03-26 00:37:30 +01:00
$(recurse_always)