# ----------
# src/test/regress/parallel_schedule
#
# By convention, we put no more than twenty tests in any one parallel group;
# this limits the number of connections needed to run the tests.
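#
# Each "test:" line defines a group of test scripts that pg_regress runs
# concurrently; the groups themselves run in the order listed in this file.
# An "ignore:" line names a test whose failure is reported but does not
# cause the overall run to fail.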
# ----------
# run tablespace by itself, and first, because it forces a checkpoint;
# we'd prefer not to have checkpoints later in the tests because that
# interferes with crash-recovery testing.
test: tablespace
# ----------
# The first group of parallel tests
# ----------
test: boolean char name varchar text int2 int4 int8 oid float4 float8 bit numeric txid uuid enum money rangetypes pg_lsn
# Depends on things setup during char, varchar and text
test: strings
# Depends on int2, int4, int8, float4, float8
test: numerology
# ----------
# The second group of parallel tests
# ----------
test: point lseg line box path polygon circle date time timetz timestamp timestamptz interval abstime reltime tinterval inet macaddr tstypes comments
# ----------
# Another group of parallel tests
# geometry depends on point, lseg, box, path, polygon and circle
# horology depends on interval, timetz, timestamp, timestamptz, reltime and abstime
# ----------
test: geometry horology regex oidjoins type_sanity opr_sanity
# ----------
# These five run serially; each of the last four depends on the previous one
# ----------
test: insert
test: create_function_1
test: create_type
test: create_table
test: create_function_2
# ----------
# Load huge amounts of data
# We should split the data into separate files and then run two
# COPY tests in parallel, to check that COPY itself is concurrency-safe.
# ----------
test: copy copyselect
# ----------
# More groups of parallel tests
# ----------
test: create_misc create_operator
# These depend on the above two
test: create_index create_view
# ----------
# Another group of parallel tests
# ----------
test: create_aggregate create_function_3 create_cast constraints triggers inherit create_table_like typed_table vacuum drop_if_exists updatable_views
# ----------
# sanity_check does a vacuum, affecting the sort order of SELECT *
# results. So it should not run in parallel with other tests.
# ----------
test: sanity_check
# ----------
# Believe it or not, select creates a table that subsequent
# tests need.
# ----------
test: errors
test: select
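# random's results are nondeterministic by design, so don't let an
# occasional failure there count against the run as a whole.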
ignore: random
# ----------
# Another group of parallel tests
# ----------
test: select_into select_distinct select_distinct_on select_implicit select_having subselect union case join aggregates transactions random portals arrays btree_index hash_index update namespace prepared_xacts delete
# ----------
# Another group of parallel tests
# ----------
test: privileges security_label collate matview lock replica_identity
# ----------
# Another group of parallel tests
# ----------
test: alter_generic misc psql async
# rules cannot run concurrently with any test that creates a view
test: rules
# event triggers cannot run concurrently with any test that runs DDL
test: event_trigger
# ----------
# Another group of parallel tests
# ----------
test: select_views portals_p2 foreign_key cluster dependency guc bitmapops combocid tsearch tsdicts foreign_data window xmlmap functional_deps advisory_lock json jsonb indirect_toast
# ----------
# Another group of parallel tests
# NB: temp.sql does a reconnect which transiently uses 2 connections,
# so keep this parallel group to at most 19 tests
# ----------
test: plancache limit plpgsql copy2 temp domain rangefuncs prepare without_oid conversion truncate alter_table sequence polymorphism rowtypes returning largeobject with xml
# run stats by itself because its delay may be insufficient under heavy load
test: stats