# ----------
# src/test/regress/parallel_schedule
#
# By convention, we put no more than twenty tests in any one parallel group;
# this limits the number of connections needed to run the tests.
# ----------
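
# Informal note (editor's sketch of standard pg_regress behavior, not
# upstream text): each "test:" line below runs as one parallel group, with
# one connection per listed test; an "ignore:" line does not itself run the
# named test, it only marks its failure as non-fatal. "make check" feeds
# this file to pg_regress roughly as:
#   pg_regress --schedule=parallel_schedule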

# run tablespace by itself, and first, because it forces a checkpoint;
# we'd prefer not to have checkpoints later in the tests because that
# interferes with crash-recovery testing.
test: tablespace

# ----------
# The first group of parallel tests
# ----------
test: boolean char name varchar text int2 int4 int8 oid float4 float8 bit numeric txid uuid enum money rangetypes pg_lsn regproc

# ----------
# The second group of parallel tests
# strings depends on char, varchar and text
# numerology depends on int2, int4, int8, float4, float8
# ----------
test: strings numerology point lseg line box path polygon circle date time timetz timestamp timestamptz interval inet macaddr macaddr8 tstypes

# ----------
# Another group of parallel tests
# geometry depends on point, lseg, box, path, polygon and circle
# horology depends on interval, timetz, timestamp, timestamptz
# ----------
test: geometry horology regex oidjoins type_sanity opr_sanity misc_sanity comments expressions

# ----------
# These four each depend on the previous one
# ----------
test: create_function_1
test: create_type
test: create_table
test: create_function_2

# ----------
# Load huge amounts of data
# We should split the data files into single files and then
# execute two copy tests in parallel, to check that copy itself
# is concurrency-safe.
# ----------
test: copy copyselect copydml insert insert_conflict

# ----------
# More groups of parallel tests
# ----------
test: create_misc create_operator create_procedure

# These depend on create_misc and create_operator
test: create_index create_index_spgist create_view index_including index_including_gist

# ----------
# Another group of parallel tests
# ----------
test: create_aggregate create_function_3 create_cast constraints triggers select inherit typed_table vacuum drop_if_exists updatable_views roleattributes create_am hash_func errors

# ----------
# sanity_check does a vacuum, affecting the sort order of SELECT *
# results. So it should not run in parallel with other tests.
# ----------
test: sanity_check

# ----------
# Another group of parallel tests
# Note: the ignore: line does not run random; it just marks the test as ignorable
# ----------
ignore: random
test: select_into select_distinct select_distinct_on select_implicit select_having subselect union case join aggregates transactions random portals arrays btree_index hash_index update delete namespace prepared_xacts

# ----------
# Another group of parallel tests
# ----------
test: brin gin gist spgist privileges init_privs security_label collate matview lock replica_identity rowsecurity object_address tablesample groupingsets drop_operator password identity generated join_hash

# ----------
# Another group of parallel tests
# ----------
test: create_table_like alter_generic alter_operator misc async dbsize misc_functions sysviews tsrf tidscan collate.icu.utf8

# rules cannot run concurrently with any test that creates
# a view or rule in the public schema
# collate.*.utf8 tests cannot be run in parallel with each other
test: rules psql psql_crosstab amutils stats_ext collate.linux.utf8

# run by itself so it can run parallel workers
test: select_parallel

test: write_parallel

# no relation-related tests can be put in this group
test: publication subscription

# ----------
# Another group of parallel tests
# ----------
test: select_views portals_p2 foreign_key cluster dependency guc bitmapops combocid tsearch tsdicts foreign_data window xmlmap functional_deps advisory_lock indirect_toast equivclass

# ----------
# Another group of parallel tests (JSON related)
# ----------
test: json jsonb json_encoding jsonpath jsonpath_encoding jsonb_jsonpath

# ----------
# Another group of parallel tests
# NB: temp.sql does a reconnect which transiently uses 2 connections,
# so keep this parallel group to at most 19 tests
# ----------
test: plancache limit plpgsql copy2 temp domain rangefuncs prepare conversion truncate alter_table sequence polymorphism rowtypes returning largeobject with xml

# ----------
# Another group of parallel tests
# ----------
test: partition_join partition_prune reloptions hash_part indexing partition_aggregate partition_info tuplesort explain

# event triggers cannot run concurrently with any test that runs DDL
test: event_trigger

# this test also uses event triggers, so likewise run it by itself
test: fast_default

# run stats by itself because its delay may be insufficient under heavy load
test: stats