/*-------------------------------------------------------------------------
 *
 * toasting.h
 *	  This file provides some definitions to support creation of toast tables
 *
 * Caution: all #define's with numeric values in this file had better be
 * object OIDs, else renumber_oids.pl might change them inappropriately.
 *
 *
 * Portions Copyright (c) 1996-2019, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 * src/include/catalog/toasting.h
 *
 *-------------------------------------------------------------------------
 */
#ifndef TOASTING_H
#define TOASTING_H

#include "storage/lock.h"

/*
 * toasting.c prototypes
 */
extern void NewRelationCreateToastTable(Oid relOid, Datum reloptions);
extern void NewHeapCreateToastTable(Oid relOid, Datum reloptions,
									LOCKMODE lockmode);
extern void AlterTableCreateToastTable(Oid relOid, Datum reloptions,
									   LOCKMODE lockmode);
extern void BootstrapToastTable(char *relName,
								Oid toastOid, Oid toastIndexOid);


/*
 * This macro is just to keep the C compiler from spitting up on the
 * upcoming commands for Catalog.pm.
 */
#define DECLARE_TOAST(name,toastoid,indexoid) extern int no_such_variable


/*
 * What follows are lines processed by genbki.pl to create the statements
 * the bootstrap parser will turn into BootstrapToastTable commands.
 * Each line specifies the system catalog that needs a toast table,
 * the OID to assign to the toast table, and the OID to assign to the
 * toast table's index.  The reason we hard-wire these OIDs is that we
 * need stable OIDs for shared relations, and that includes toast tables
 * of shared relations.
 */
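
/*
 * Illustrative sketch (the generated command below is an assumption for
 * illustration, not text taken from this file): given a declaration such as
 *
 *		DECLARE_TOAST(pg_proc, 2836, 2837);
 *
 * genbki.pl emits a postgres.bki line along the lines of
 *
 *		declare toast 2836 2837 on pg_proc
 *
 * which the bootstrap parser turns into a BootstrapToastTable call, creating
 * the toast table and its index with exactly those OIDs.
 */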

/* normal catalogs */
DECLARE_TOAST(pg_aggregate, 4159, 4160);
DECLARE_TOAST(pg_attrdef, 2830, 2831);
DECLARE_TOAST(pg_collation, 4161, 4162);
DECLARE_TOAST(pg_constraint, 2832, 2833);
DECLARE_TOAST(pg_default_acl, 4143, 4144);
DECLARE_TOAST(pg_description, 2834, 2835);
DECLARE_TOAST(pg_event_trigger, 4145, 4146);
DECLARE_TOAST(pg_extension, 4147, 4148);
DECLARE_TOAST(pg_foreign_data_wrapper, 4149, 4150);
DECLARE_TOAST(pg_foreign_server, 4151, 4152);
DECLARE_TOAST(pg_foreign_table, 4153, 4154);
DECLARE_TOAST(pg_init_privs, 4155, 4156);
DECLARE_TOAST(pg_language, 4157, 4158);
DECLARE_TOAST(pg_namespace, 4163, 4164);
DECLARE_TOAST(pg_partitioned_table, 4165, 4166);
DECLARE_TOAST(pg_policy, 4167, 4168);
DECLARE_TOAST(pg_proc, 2836, 2837);
DECLARE_TOAST(pg_rewrite, 2838, 2839);
DECLARE_TOAST(pg_seclabel, 3598, 3599);
DECLARE_TOAST(pg_statistic, 2840, 2841);
DECLARE_TOAST(pg_statistic_ext, 3439, 3440);
DECLARE_TOAST(pg_statistic_ext_data, 3430, 3431);
DECLARE_TOAST(pg_trigger, 2336, 2337);
DECLARE_TOAST(pg_ts_dict, 4169, 4170);
DECLARE_TOAST(pg_type, 4171, 4172);
DECLARE_TOAST(pg_user_mapping, 4173, 4174);

/* shared catalogs */
DECLARE_TOAST(pg_authid, 4175, 4176);
#define PgAuthidToastTable 4175
#define PgAuthidToastIndex 4176
DECLARE_TOAST(pg_database, 4177, 4178);
#define PgDatabaseToastTable 4177
#define PgDatabaseToastIndex 4178
DECLARE_TOAST(pg_db_role_setting, 2966, 2967);
#define PgDbRoleSettingToastTable 2966
#define PgDbRoleSettingToastIndex 2967
DECLARE_TOAST(pg_pltemplate, 4179, 4180);
#define PgPlTemplateToastTable 4179
#define PgPlTemplateToastIndex 4180
DECLARE_TOAST(pg_replication_origin, 4181, 4182);
#define PgReplicationOriginToastTable 4181
#define PgReplicationOriginToastIndex 4182
DECLARE_TOAST(pg_shdescription, 2846, 2847);
#define PgShdescriptionToastTable 2846
#define PgShdescriptionToastIndex 2847
DECLARE_TOAST(pg_shseclabel, 4060, 4061);
#define PgShseclabelToastTable 4060
#define PgShseclabelToastIndex 4061
DECLARE_TOAST(pg_subscription, 4183, 4184);
#define PgSubscriptionToastTable 4183
#define PgSubscriptionToastIndex 4184
DECLARE_TOAST(pg_tablespace, 4185, 4186);
#define PgTablespaceToastTable 4185
#define PgTablespaceToastIndex 4186
#endif /* TOASTING_H */