# ----------
# src/test/regress/parallel_schedule
#
# By convention, we put no more than twenty tests in any one parallel group;
# this limits the number of connections needed to run the tests.
# ----------
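#
# For reference, a sketch of the directives that pg_regress accepts in
# schedule files such as this one (the flag spellings below are the
# common ones; see pg_regress --help for the authoritative list):
#
#   test: a b c     -- run scripts a, b, and c concurrently as one group
#   test: a         -- run script a by itself
#   ignore: a       -- don't run a here, just mark its failures as ignorable
#
# A typical invocation consuming this file looks something like:
#   pg_regress --schedule=parallel_schedule --max-connections=20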
# run tablespace by itself, and first, because it forces a checkpoint;
# we'd prefer not to have checkpoints later in the tests because that
# interferes with crash-recovery testing.
test: tablespace
# ----------
# The first group of parallel tests
# ----------
test: boolean char name varchar text int2 int4 int8 oid float4 float8 bit numeric txid uuid enum money rangetypes pg_lsn regproc
# ----------
# The second group of parallel tests
# strings depends on char, varchar and text
# numerology depends on int2, int4, int8, float4, float8
# multirangetypes depends on rangetypes
# multirangetypes shouldn't be in the same group as type_sanity
# ----------
test: strings numerology point lseg line box path polygon circle date time timetz timestamp timestamptz interval inet macaddr macaddr8 tstypes multirangetypes
# ----------
# Another group of parallel tests
# geometry depends on point, lseg, box, path, polygon and circle
# horology depends on interval, timetz, timestamp, timestamptz
# ----------
test: geometry horology regex type_sanity opr_sanity misc_sanity comments expressions unicode xid mvcc
# ----------
# These four each depend on the previous one
# ----------
test: create_function_1
test: create_type
test: create_table
test: create_function_2
# ----------
# Load huge amounts of data
# We should split the data files into single files and then
# execute two copy tests in parallel, to check that copy itself
# is concurrency-safe.
# ----------
test: copy copyselect copydml insert insert_conflict
# ----------
# More groups of parallel tests
# ----------
test: create_misc create_operator create_procedure
# These depend on create_misc and create_operator
test: create_index create_index_spgist create_view index_including index_including_gist
# ----------
# Another group of parallel tests
# ----------
test: create_aggregate create_function_3 create_cast constraints triggers select inherit typed_table vacuum drop_if_exists updatable_views roleattributes create_am hash_func errors infinite_recurse
# ----------
# sanity_check does a vacuum, affecting the sort order of SELECT *
# results, so it should not run in parallel with other tests.
# ----------
test: sanity_check
# ----------
# Another group of parallel tests
# Note: the ignore: line does not run random, it just marks it as ignorable
# ----------
ignore: random
test: select_into select_distinct select_distinct_on select_implicit select_having subselect union case join aggregates transactions random portals arrays btree_index hash_index update delete namespace prepared_xacts
# ----------
# Another group of parallel tests
# ----------
test: brin gin gist spgist privileges init_privs security_label collate matview lock replica_identity rowsecurity object_address tablesample groupingsets drop_operator password identity generated join_hash
# ----------
# Additional BRIN tests
# ----------
test: brin_bloom brin_multi
# ----------
# Another group of parallel tests
# ----------
test: create_table_like alter_generic alter_operator misc async dbsize misc_functions sysviews tsrf tid tidscan tidrangescan collate.icu.utf8 incremental_sort
# rules cannot run concurrently with any test that creates
# a view or rule in the public schema
# collate.*.utf8 tests cannot be run in parallel with each other
test: rules psql psql_crosstab amutils stats_ext collate.linux.utf8
# run by itself so it can run parallel workers
test: select_parallel
test: write_parallel
# no relation-related tests can be put in this group
test: publication subscription
# ----------
# Another group of parallel tests
# ----------
test: select_views portals_p2 foreign_key cluster dependency guc bitmapops combocid tsearch tsdicts foreign_data window xmlmap functional_deps advisory_lock indirect_toast equivclass
# ----------
# Another group of parallel tests (JSON related)
# ----------
test: json jsonb json_encoding jsonpath jsonpath_encoding jsonb_jsonpath
# ----------
# Another group of parallel tests
# NB: temp.sql does a reconnect, which transiently uses 2 connections,
# so keep this parallel group to at most 19 tests (19 tests plus the
# extra connection stays within the twenty-connection budget above)
# ----------
test: plancache limit plpgsql copy2 temp domain rangefuncs prepare conversion truncate alter_table sequence polymorphism rowtypes returning largeobject with xml
# ----------
# Another group of parallel tests
# ----------
test: partition_join partition_prune reloptions hash_part indexing partition_aggregate partition_info tuplesort explain compression resultcache
# event triggers cannot run concurrently with any test that runs DDL
# oidjoins is read-only, though, and should run late for best coverage
test: event_trigger oidjoins
# this test also uses event triggers, so likewise run it by itself
test: fast_default
# run stats by itself because its delay may be insufficient under heavy load
test: stats