postgresql/src/test/regress/expected/sanity_check.out

VACUUM;
--
-- sanity check: if we don't have indices, the test will take years to
-- complete. But skip TOAST relations (since they will have varying
-- names depending on the current OID counter) as well as temp tables
-- of other backends (to avoid timing-dependent behavior).
--
-- temporarily disable fancy output, so catalog changes create less diff noise
\a\t
SELECT relname, relhasindex
   FROM pg_class c LEFT JOIN pg_namespace n ON n.oid = relnamespace
   WHERE relkind IN ('r', 'p') AND (nspname ~ '^pg_temp_') IS NOT TRUE
   ORDER BY relname;
a|f
a_star|f
aggtest|f
array_index_op_test|t
array_op_test|f
b|f
b_star|f
bit_defaults|f
box_tbl|f
bprime|f
bt_f8_heap|t
bt_i4_heap|t
bt_name_heap|t
bt_txt_heap|t
c|f
c_star|f
char_tbl|f
check2_tbl|f
check_tbl|f
circle_tbl|t
city|f
copy_tbl|f
d|f
d_star|f
date_tbl|f
default_tbl|f
defaultexpr_tbl|f
dept|f
dupindexcols|t
e_star|f
emp|f
equipment_r|f
extra_wide_table|f
f_star|f
fast_emp4000|t
float4_tbl|f
float8_tbl|f
func_index_heap|t
hash_f8_heap|t
hash_i4_heap|t
hash_name_heap|t
hash_txt_heap|t
hobbies_r|f
ihighway|t
inet_tbl|f
insert_tbl|f
int2_tbl|f
int4_tbl|f
int8_tbl|f
interval_tbl|f
invalid_check_con|f
invalid_check_con_child|f
iportaltest|f
kd_point_tbl|t
line_tbl|f
log_table|f
lseg_tbl|f
main_table|f
mlparted|f
mlparted1|f
mlparted11|f
mlparted12|f
mlparted2|f
mlparted3|f
mlparted4|f
mlparted_def|f
mlparted_def1|f
mlparted_def2|f
mlparted_defd|f
money_data|f
num_data|f
num_exp_add|t
num_exp_div|t
num_exp_ln|t
num_exp_log10|t
num_exp_mul|t
num_exp_power_10_ln|t
num_exp_sqrt|t
num_exp_sub|t
num_input_test|f
num_result|f
num_typemod_test|f
nummultirange_test|t
numrange_test|t
onek|t
onek2|t
path_tbl|f
person|f
persons|f
persons2|t
persons3|t
pg_aggregate|t
pg_am|t
pg_amop|t
pg_amproc|t
pg_attrdef|t
pg_attribute|t
pg_auth_members|t
pg_authid|t
pg_cast|t
pg_class|t
pg_collation|t
pg_constraint|t
pg_conversion|t
pg_database|t
pg_db_role_setting|t
pg_default_acl|t
pg_depend|t
pg_description|t
pg_enum|t
pg_event_trigger|t
pg_extension|t
pg_foreign_data_wrapper|t
pg_foreign_server|t
pg_foreign_table|t
pg_index|t
pg_inherits|t
pg_init_privs|t
pg_language|t
pg_largeobject|t
pg_largeobject_metadata|t
pg_namespace|t
pg_opclass|t
pg_operator|t
pg_opfamily|t
pg_partitioned_table|t
pg_policy|t
pg_proc|t
pg_publication|t
pg_publication_namespace|t
pg_publication_rel|t
pg_range|t
pg_replication_origin|t
pg_rewrite|t
pg_seclabel|t
pg_sequence|t
pg_shdepend|t
pg_shdescription|t
pg_shseclabel|t
pg_statistic|t
pg_statistic_ext|t
pg_statistic_ext_data|t
pg_subscription|t
pg_subscription_rel|t
pg_tablespace|t
pg_transform|t
pg_trigger|t
pg_ts_config|t
pg_ts_config_map|t
pg_ts_dict|t
pg_ts_parser|t
pg_ts_template|t
pg_type|t
pg_user_mapping|t
point_tbl|t
polygon_tbl|t
quad_box_tbl|t
quad_box_tbl_ord_seq1|f
quad_box_tbl_ord_seq2|f
quad_point_tbl|t
quad_poly_tbl|t
radix_text_tbl|t
ramp|f
real_city|f
reservations|f
road|t
shighway|t
slow_emp4000|f
sql_features|f
sql_implementation_info|f
sql_parts|f
sql_sizing|f
stud_emp|f
student|f
tab_core_types|f
tableam_parted_a_heap2|f
tableam_parted_b_heap2|f
tableam_parted_c_heap2|f
tableam_parted_d_heap2|f
tableam_parted_heap2|f
tableam_tbl_heap2|f
tableam_tblas_heap2|f
tbl_include_box|t
tbl_include_box_pk|f
tbl_include_pk|t
tbl_include_reg|t
tbl_include_unique1|t
tbl_include_unique2|f
tenk1|t
tenk2|t
test_range_excl|t
test_range_gist|t
test_range_spgist|t
test_tsvector|f
testjsonb|f
text_tbl|f
textrange_test|t
time_tbl|f
timestamp_tbl|f
timestamptz_tbl|f
timetz_tbl|f
tmp|f
trigger_parted|t
trigger_parted_p1|t
trigger_parted_p1_1|t
trigger_parted_p2|t
trigger_parted_p2_2|t
varchar_tbl|f
view_base_table|t
-- restore normal output mode
\a\t
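-- (illustrative note, not part of the expected test output)
-- \a toggles psql between aligned and unaligned output and \t toggles
-- tuples-only mode; with both toggled, the rows above print as bare
-- relname|relhasindex pairs with no header or "(n rows)" footer.
-- A rough equivalent from the shell, assuming the regression database is
-- simply named "regression", might be:
--   psql -At -c "SELECT relname, relhasindex FROM pg_class
--                WHERE relkind IN ('r', 'p') ORDER BY relname" regression
-- where -A selects unaligned output and -t suppresses headers and footers.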
--
-- another sanity check: every system catalog that has OIDs should have
-- a unique index on OID. This ensures that the OIDs will be unique,
-- even after the OID counter wraps around.
-- We exclude non-system tables from the check by looking at nspname.
--
SELECT relname, nspname
  FROM pg_class c LEFT JOIN pg_namespace n ON n.oid = relnamespace JOIN pg_attribute a ON (attrelid = c.oid AND attname = 'oid')
  WHERE relkind = 'r' and c.oid < 16384
      AND ((nspname ~ '^pg_') IS NOT FALSE)
      AND NOT EXISTS (SELECT 1 FROM pg_index i WHERE indrelid = c.oid
                      AND indkey[0] = a.attnum AND indnatts = 1
                      AND indisunique AND indimmediate);
relname | nspname
---------+---------
(0 rows)
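-- (illustrative note, not part of the expected test output)
-- For a single catalog, the unique OID index that the check above relies on
-- can be inspected directly; for example, for pg_class one might run:
--   SELECT i.indexrelid::regclass
--     FROM pg_index i
--     JOIN pg_attribute a ON a.attrelid = i.indrelid AND a.attname = 'oid'
--    WHERE i.indrelid = 'pg_class'::regclass
--      AND i.indkey[0] = a.attnum AND i.indnatts = 1
--      AND i.indisunique AND i.indimmediate;
-- which should return pg_class_oid_index on a stock installation.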
-- check that relations without storage don't have relfilenode
SELECT relname, relkind
  FROM pg_class
  WHERE relkind IN ('v', 'c', 'f', 'p', 'I')
      AND relfilenode <> 0;
relname | relkind
---------+---------
(0 rows)
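-- (illustrative note, not part of the expected test output)
-- The relkind codes tested above are 'v' = view, 'c' = composite type,
-- 'f' = foreign table, 'p' = partitioned table and 'I' = partitioned index;
-- none of these relation kinds has its own storage, so pg_class.relfilenode
-- is expected to be 0 for all of them.  A quick way to spot-check this for
-- the system views, for example, is:
--   SELECT relname, relkind, relfilenode
--     FROM pg_class
--    WHERE relkind = 'v' AND relnamespace = 'pg_catalog'::regnamespace
--    ORDER BY relname LIMIT 5;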